Wednesday, September 26, 2012

MPLS LAB1: LDP Adjacency. Part II.

Topics:

  • Targeted LDP session
  • Targeted LDP session hello/holdtime interval modification
  • Targeted Sessions with the accept from ACL.
  • Label advertisement control, outbound and inbound.
  • LDP Session Protection. 
Gear Specs: 

Everything is running on a Dell Latitude with 8 GB of RAM and a Core i7 2640 (2.8 GHz), on Linux Mint (Debian edition).

Platform: 4x Dynamips/GNS3 emulated Cisco 3745 routers with 128 MB of RAM (each), running IOS C3745-ADVENTERPRISEK9-M version 12.4(25d).


Topology:





In the first part of the lab we tested the main LDP adjacency components:

  • LDP adjacencies and verification.
  • Hello/Holdtime interval modification.
  • LDP autoconfig.
  • LDP authentication.
Part I final configs:

P1:

!         
mpls ldp discovery hello interval 10
mpls ldp discovery hello holdtime 30
!
!
!         
!
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
!
interface FastEthernet0/0
 description Link_to_P2
 ip address 10.0.12.1 255.255.255.248
 ip ospf network point-to-point
!
interface Serial0/0
 description 2d_link_to_P2
 ip address 10.100.200.1 255.255.255.252
 no fair-queue
 clock rate 2000000
!
!
!
router ospf 1
 mpls ldp autoconfig area 0
 log-adjacency-changes
 network 0.0.0.0 255.255.255.255 area 0
!

P2:


hostname P2
!
!
!
!
!
mpls ldp discovery hello interval 10
mpls ldp discovery hello holdtime 30
!
!
!
interface Loopback0
 ip address 10.2.2.2 255.255.255.255
!
interface FastEthernet0/0
 description Link_to_P1
 ip address 10.0.12.2 255.255.255.248
 ip ospf network point-to-point
!
interface Serial0/0
 description 2d_link_to_P1
 ip address 10.100.200.2 255.255.255.252
 no fair-queue
 clock rate 2000000
!
interface FastEthernet0/1
 description Link_To_P3
 ip address 10.0.23.2 255.255.255.248
 ip ospf network point-to-point
!
!
!
!
router ospf 1
 mpls ldp autoconfig area 0
 log-adjacency-changes
 network 0.0.0.0 255.255.255.255 area 0
!

P3:

!
hostname P3
!
!
mpls ldp neighbor 10.4.4.4 password cisco123
!
!
!
!
!         
!
!
interface Loopback0
 ip address 10.3.3.3 255.255.255.255
!
interface FastEthernet0/0
 description Link_to_P4
 ip address 10.0.34.3 255.255.255.248
 ip ospf network point-to-point
 mpls ip
!
!
interface FastEthernet0/1
 description Link_To_P2
 ip address 10.0.23.3 255.255.255.248
 ip ospf network point-to-point
 mpls ip
!
!
!
!
router ospf 1
 log-adjacency-changes
 network 0.0.0.0 255.255.255.255 area 0
!

P4:

hostname P4
!
!
!
mpls ldp neighbor 10.3.3.3 password cisco123
!
!
!
!
interface Loopback0
 ip address 10.4.4.4 255.255.255.255
!
interface FastEthernet0/0
 description Link_To_P3
 ip address 10.0.34.4 255.255.255.248
 ip ospf network point-to-point
 mpls ip
!
!
!
router ospf 1
 log-adjacency-changes
 network 0.0.0.0 255.255.255.255 area 0
!

Configuring a Targeted LDP session.

Let's say we need to establish an LDP peering between P1 and P4 for some obscure TE requirement. As we can see, they are not directly connected; this is where a targeted LDP session comes in.

The command to configure a targeted LDP session is mpls ldp neighbor <neighbor-ID> targeted {ldp | tdp}, and it is configured on both routers.

So now let's configure a targeted LDP session between P1 and P4:

P1(config)#mpls ldp neighbor 10.4.4.4 targeted ldp 

P4(config)#mpls ldp neighbor 10.1.1.1 targeted ldp

After configuring it on both sides, we see the notification message indicating that the adjacency is up:

*Mar  1 01:23:08.035: %LDP-5-NBRCHG: LDP Neighbor 10.4.4.4:0 (2) is UP

We can verify the adjacency with the normal commands:

P1#sh mpls ldp neighbor 
    Peer LDP Ident: 10.2.2.2:0; Local LDP Ident 10.1.1.1:0
TCP connection: 10.2.2.2.19560 - 10.1.1.1.646
State: Oper; Msgs sent/rcvd: 71/71; Downstream
Up time: 00:52:50
LDP discovery sources:
 Serial0/0, Src IP addr: 10.100.200.2
 FastEthernet0/0, Src IP addr: 10.0.12.2
        Addresses bound to peer LDP Ident:
          10.0.12.2       10.100.200.2    10.0.23.2       10.2.2.2        
    Peer LDP Ident: 10.4.4.4:0; Local LDP Ident 10.1.1.1:0
TCP connection: 10.4.4.4.22964 - 10.1.1.1.646
State: Oper; Msgs sent/rcvd: 11/12; Downstream
Up time: 00:00:48
LDP discovery sources:
 Targeted Hello 10.1.1.1 -> 10.4.4.4, active, passive
        Addresses bound to peer LDP Ident:
          10.0.34.4       10.4.4.4 

P1#show mpls ldp discovery 
 Local LDP Identifier:
    10.1.1.1:0
    Discovery Sources:
    Interfaces:
FastEthernet0/0 (ldp): xmit/recv
   LDP Id: 10.2.2.2:0
Serial0/0 (ldp): xmit/recv
   LDP Id: 10.2.2.2:0
    Targeted Hellos:
10.1.1.1 -> 10.4.4.4 (ldp): active/passive, xmit/recv
   LDP Id: 10.4.4.4:0


Here we can see the difference between a directly connected session and a targeted one; in both outputs we see the targeted hello discovery source.

Targeted LDP session hello/holdtime interval modification.

Now let's say that we need a shorter hello/holdtime interval for some reason. This is pretty straightforward and similar to modifying the intervals for a directly connected peering.

To determine the current values:

P1#show mpls ldp parameters 
Protocol version: 1
Downstream label generic region: min label: 16; max label: 100000
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 30 sec; interval: 10 sec
Discovery targeted hello: holdtime: 90 sec; interval: 10 sec
Downstream on Demand max hop count: 255
Downstream on Demand Path Vector Limit: 255
LDP for targeted sessions
LDP initial/maximum backoff: 15/120 sec
LDP loop detection: off

P4#sh mpls ldp parameters 
Protocol version: 1
Downstream label generic region: min label: 16; max label: 100000
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 15 sec; interval: 5 sec
Discovery targeted hello: holdtime: 90 sec; interval: 10 sec
Downstream on Demand max hop count: 255
Downstream on Demand Path Vector Limit: 255
LDP for targeted sessions
LDP initial/maximum backoff: 15/120 sec
LDP loop detection: off


We can see that both are using the default targeted-hello values (hello = 10, holdtime = 90). We are going to change these values to hello = 5 and holdtime = 15.

P1(config)#mpls ldp discovery targeted-hello interval 5
P1(config)#mpls ldp discovery targeted-hello holdtime 15

P4(config)#mpls ldp discovery targeted-hello interval 5
P4(config)#mpls ldp discovery targeted-hello holdtime 15


Now on to verifying the changes:

P1#sh mpls ldp parameters 
Protocol version: 1
Downstream label generic region: min label: 16; max label: 100000
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 30 sec; interval: 10 sec
Discovery targeted hello: holdtime: 15 sec; interval: 5 sec
Downstream on Demand max hop count: 255
Downstream on Demand Path Vector Limit: 255
LDP for targeted sessions
LDP initial/maximum backoff: 15/120 sec
LDP loop detection: off


P4#sh mpls ldp parameters 
Protocol version: 1
Downstream label generic region: min label: 16; max label: 100000
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 15 sec; interval: 5 sec
Discovery targeted hello: holdtime: 15 sec; interval: 5 sec
Downstream on Demand max hop count: 255
Downstream on Demand Path Vector Limit: 255
LDP for targeted sessions
LDP initial/maximum backoff: 15/120 sec
LDP loop detection: off


Verifying that the targeted adjacency is still up:

P1#sh mpls ldp neighbor 
    Peer LDP Ident: 10.2.2.2:0; Local LDP Ident 10.1.1.1:0
TCP connection: 10.2.2.2.19560 - 10.1.1.1.646
State: Oper; Msgs sent/rcvd: 88/89; Downstream
Up time: 01:08:39
LDP discovery sources:
 Serial0/0, Src IP addr: 10.100.200.2
 FastEthernet0/0, Src IP addr: 10.0.12.2
        Addresses bound to peer LDP Ident:
          10.0.12.2       10.100.200.2    10.0.23.2       10.2.2.2        
    Peer LDP Ident: 10.4.4.4:0; Local LDP Ident 10.1.1.1:0
TCP connection: 10.4.4.4.22964 - 10.1.1.1.646
State: Oper; Msgs sent/rcvd: 30/30; Downstream
Up time: 00:16:37
LDP discovery sources:
 Targeted Hello 10.1.1.1 -> 10.4.4.4, active, passive
        Addresses bound to peer LDP Ident:
          10.0.34.4       10.4.4.4        


P4#sh mpls ldp neighbor 
    Peer LDP Ident: 10.3.3.3:0; Local LDP Ident 10.4.4.4:0
TCP connection: 10.3.3.3.646 - 10.4.4.4.50451
State: Oper; Msgs sent/rcvd: 64/64; Downstream
Up time: 00:46:53
LDP discovery sources:
 FastEthernet0/0, Src IP addr: 10.0.34.3
        Addresses bound to peer LDP Ident:
          10.0.34.3       10.0.23.3       10.3.3.3        
    Peer LDP Ident: 10.1.1.1:0; Local LDP Ident 10.4.4.4:0
TCP connection: 10.1.1.1.646 - 10.4.4.4.22964
State: Oper; Msgs sent/rcvd: 31/31; Downstream
Up time: 00:17:25
LDP discovery sources:
 Targeted Hello 10.4.4.4 -> 10.1.1.1, active, passive
        Addresses bound to peer LDP Ident:
          10.0.12.1       10.100.200.1    10.1.1.1        


Targeted Sessions with the accept from ACL.

Let's say we have a case in which we cannot configure the mpls ldp neighbor targeted command on one of the routers. We can still build the targeted session by using mpls ldp discovery targeted-hello accept [from acl] on the router where we cannot use the other command.

First we need to unconfigure the targeted session we built.

P4(config)#no mpls ldp neighbor 10.1.1.1 targeted ldp 

Next we'll configure the accept from command:

P4(config)#access-list 2 permit host 10.1.1.1
P4(config)#mpls ldp discovery targeted-hello accept from 2

Here we configured an ACL matching only host P1 (10.1.1.1), so the only peer that can build the targeted LDP session is P1, followed by the discovery command referencing that ACL.

Verifying:

P4#sh mpls ldp neighbor | begin 10.1.1.1:0
    Peer LDP Ident: 10.1.1.1:0; Local LDP Ident 10.4.4.4:0
TCP connection: 10.1.1.1.646 - 10.4.4.4.22183
State: Oper; Msgs sent/rcvd: 13/13; Downstream
Up time: 00:02:06
LDP discovery sources:
 Targeted Hello 10.4.4.4 -> 10.1.1.1, passive
        Addresses bound to peer LDP Ident:
          10.0.12.1       10.100.200.1    10.1.1.1        


Label advertisement control, outbound and inbound.


OK, now let's say we want to filter some labels from arriving at P4. First we need to see which labels we are receiving.


P4#sh mpls forwarding-table
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop  
tag    tag or VC   or Tunnel Id      switched   interface            
16     16          10.0.12.0/29      0          Fa0/0      10.0.34.3  
17     Pop tag     10.0.23.0/29      0          Fa0/0      10.0.34.3  
18     17          10.1.1.1/32       0          Fa0/0      10.0.34.3  
19     18          10.2.2.2/32       0          Fa0/0      10.0.34.3  
20     Pop tag     10.3.3.3/32       0          Fa0/0      10.0.34.3  
21     19          10.100.200.0/30   0          Fa0/0      10.0.34.3  

P4#sh mpls ldp bindings     
  tib entry: 10.0.12.0/29, rev 6
local binding:  tag: 16
remote binding: tsr: 10.3.3.3:0, tag: 16
remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 10.0.23.0/29, rev 8
local binding:  tag: 17
remote binding: tsr: 10.3.3.3:0, tag: imp-null
remote binding: tsr: 10.1.1.1:0, tag: 16
  tib entry: 10.0.34.0/29, rev 4
local binding:  tag: imp-null
remote binding: tsr: 10.3.3.3:0, tag: imp-null
remote binding: tsr: 10.1.1.1:0, tag: 18
  tib entry: 10.1.1.1/32, rev 10
local binding:  tag: 18
remote binding: tsr: 10.3.3.3:0, tag: 17
remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 10.2.2.2/32, rev 12
local binding:  tag: 19
remote binding: tsr: 10.3.3.3:0, tag: 18
remote binding: tsr: 10.1.1.1:0, tag: 17
  tib entry: 10.3.3.3/32, rev 14
local binding:  tag: 20
remote binding: tsr: 10.3.3.3:0, tag: imp-null
        remote binding: tsr: 10.1.1.1:0, tag: 19
  tib entry: 10.4.4.4/32, rev 2
local binding:  tag: imp-null
remote binding: tsr: 10.3.3.3:0, tag: 23
remote binding: tsr: 10.1.1.1:0, tag: 20
  tib entry: 10.100.200.0/30, rev 16
local binding:  tag: 21
remote binding: tsr: 10.3.3.3:0, tag: 19
remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 172.16.0.0/16, rev 39
remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 172.16.1.0/24, rev 40
remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 172.16.1.2/32, rev 18(no route)
local binding:  tag: 22
remote binding: tsr: 10.3.3.3:0, tag: 20
  tib entry: 172.16.70.0/24, rev 41
remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 172.16.70.1/32, rev 20(no route)
local binding:  tag: 23
remote binding: tsr: 10.3.3.3:0, tag: 21
  tib entry: 192.168.168.0/24, rev 42
remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 192.168.168.1/32, rev 22(no route)
        local binding:  tag: 24
remote binding: tsr: 10.3.3.3:0, tag: 22



Now, we only want to receive a label for P3's loopback (10.3.3.3), so we do the following:

P4(config)#access-list 23 permit 10.3.3.0 0.0.0.255

P4(config)#mpls ldp neighbor 10.3.3.3 labels accept 23




Results! As we wanted, we are only receiving a label for 10.3.3.3; all the other prefixes are untagged, meaning P4 is not accepting labels for them.

P4#sh mpls forwarding-table
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop  
tag    tag or VC   or Tunnel Id      switched   interface            
16     Untagged    10.0.12.0/29      0          Fa0/0      10.0.34.3  
17     Untagged    10.0.23.0/29      0          Fa0/0      10.0.34.3  
18     Untagged    10.1.1.1/32       0          Fa0/0      10.0.34.3  
19     Untagged    10.2.2.2/32       0          Fa0/0      10.0.34.3  
20     Pop tag     10.3.3.3/32       0          Fa0/0      10.0.34.3  
21     Untagged    10.100.200.0/30   0          Fa0/0      10.0.34.3  

Now let's say that we need to filter the labels for every router loopback; that means we want those prefixes unlabeled across the whole network, with only the point-to-point links being labeled (for some obscure reason).

On all routers (P1 through P4); shown here on P2:


P2(config)# access-list 1 deny host 10.1.1.1
P2(config)# access-list 1 deny host 10.2.2.2
P2(config)# access-list 1 deny host 10.3.3.3
P2(config)# access-list 1 deny host 10.4.4.4
P2(config)# access-list 1 permit any
P2(config)#
P2(config)#no mpls ldp advertise-labels
P2(config)# mpls ldp advertise-labels for 1

Note: "For the mpls ldp advertise-labels for 1 command to take effect, we first need to stop the global LDP label advertisement with no mpls ldp advertise-labels."


Verifying


P1#show mpls forwarding-table
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop  
tag    tag or VC   or Tunnel Id      switched   interface            
16     Untagged    10.2.2.2/32       0          Fa0/0      10.0.12.2  
17     Pop tag     10.0.23.0/29      0          Fa0/0      10.0.12.2  
18     17          10.0.34.0/29      0          Fa0/0      10.0.12.2  
19     Untagged    10.3.3.3/32       0          Fa0/0      10.0.12.2  
20     Untagged    10.4.4.4/32       0          Fa0/0      10.0.12.2  


P2#sh mpls forwarding-table 
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop    
tag    tag or VC   or Tunnel Id      switched   interface              
16     Untagged    10.1.1.1/32       0          Fa0/0      10.0.12.1    
17     Pop tag     10.0.34.0/29      0          Fa0/1      10.0.23.3    
18     Untagged    10.3.3.3/32       0          Fa0/1      10.0.23.3    
19     Untagged    10.4.4.4/32       0          Fa0/1      10.0.23.3    



P3#sh mpls forwarding-table
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop  
tag    tag or VC   or Tunnel Id      switched   interface            
16     Pop tag     10.0.12.0/29      0          Fa0/1      10.0.23.2  
17     Untagged    10.1.1.1/32       0          Fa0/1      10.0.23.2  
18     Untagged    10.2.2.2/32       0          Fa0/1      10.0.23.2  
19     Untagged    10.4.4.4/32       0          Fa0/0      10.0.34.4  
20     Pop tag     10.100.200.0/30   0          Fa0/1      10.0.23.2  


P4#sh mpls forwarding-table
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop  
tag    tag or VC   or Tunnel Id      switched   interface            
16     16          10.0.12.0/29      0          Fa0/0      10.0.34.3  
17     Pop tag     10.0.23.0/29      0          Fa0/0      10.0.34.3  
18     Untagged    10.1.1.1/32       0          Fa0/0      10.0.34.3  
19     Untagged    10.2.2.2/32       0          Fa0/0      10.0.34.3  
20     Untagged    10.3.3.3/32       0          Fa0/0      10.0.34.3  
21     20          10.100.200.0/30   0          Fa0/0      10.0.34.3

As we can see, the loopback of each router is now unlabeled, and only the interface links are being labeled.


LDP Session Protection.

Now we discover that we have a link-flapping problem between P1 and P2, and LDP sessions are timing out from time to time. We decide to protect the session with the LDP Session Protection feature. LDP session protection works by creating a targeted LDP session with the peer; when the link fails, the LDP session stays up as long as an alternate path between the two routers still works.

P1

P1(config)#access-list 3 permit host 10.2.2.2
P1(config)#mpls ldp session protection for 3

After configuring this, we can test that the LDP session stays up when the f0/0 link between P1 and P2 goes down:


P2(config)#int f0/0
P2(config-if)#shutdown

After this we verify that the LDP session is still up between P1 and P2

P1#show mpls ldp neighbor 
    Peer LDP Ident: 10.2.2.2:0; Local LDP Ident 10.1.1.1:0
TCP connection: 10.2.2.2.55965 - 10.1.1.1.646
State: Oper; Msgs sent/rcvd: 52/59; Downstream
Up time: 00:26:56
LDP discovery sources:
 Serial0/0, Src IP addr: 10.100.200.2
        Addresses bound to peer LDP Ident:
          10.2.2.2        10.100.200.2    10.0.23.2       



P1#sh mpls ldp discovery
 Local LDP Identifier:
    10.1.1.1:0
    Discovery Sources:
    Interfaces:
FastEthernet0/0 (ldp): xmit
Serial0/0 (ldp): xmit/recv
   LDP Id: 10.2.2.2:0
    Targeted Hellos:
10.1.1.1 -> 10.2.2.2 (ldp): active, xmit



As we can tell, the session is still up and is now being discovered over the serial connection, so the link flapping or going down does not affect our peering with P2.


Friday, September 21, 2012

MPLS LAB1: LDP Adjacency. Part I.

Topics:



  • LDP adjacencies and verification
  • Hello/Holdtime interval modification.
  • LDP autoconfig
  • LDP authentication

Gear Specs:

Everything is running on a Dell Latitude with 8 GB of RAM and a Core i7 2640 (2.8 GHz).

Platform: 4x Dynamips/GNS3 emulated Cisco 3745 routers with 128 MB of RAM (each).

Topology:



"The serial link will only be used on the second part of the Lab for the IGP-Sync proof of concept."


Initial Configs:

P1:

!
!         
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
!
interface FastEthernet0/0
 description Link_to_P2
 ip address 10.0.12.1 255.255.255.248
 ip ospf network point-to-point
 duplex auto
 speed auto
!
interface Serial0/0
 description 2d_link_to_P2
 ip address 10.100.200.1 255.255.255.252
 no fair-queue
 clock rate 2000000
!        
!
!

P2:

!         
!
interface Loopback0
 ip address 10.2.2.2 255.255.255.255
!
interface FastEthernet0/0
 description Link_to_P1
 ip address 10.0.12.2 255.255.255.248
 ip ospf network point-to-point
 duplex auto
 speed auto
!
interface Serial0/0
 description 2d_link_to_P1
 ip address 10.100.200.2 255.255.255.252
 no fair-queue
 clock rate 2000000
!
interface FastEthernet0/1
 description Link_To_P3
 ip address 10.0.23.2 255.255.255.248
 ip ospf network point-to-point
 duplex auto
 speed auto
!
!

P3:

!         
!
interface Loopback0
 ip address 10.3.3.3 255.255.255.255
!
interface FastEthernet0/0
 description Link_to_P4
 ip address 10.0.34.3 255.255.255.248
 ip ospf network point-to-point
 duplex auto
 speed auto
!
!
interface FastEthernet0/1
 description Link_To_P2
 ip address 10.0.23.3 255.255.255.248
 ip ospf network point-to-point
 duplex auto
 speed auto
!

P4:

!
!         
!
interface Loopback0
 ip address 10.4.4.4 255.255.255.255
!
interface FastEthernet0/0
 description Link_To_P3
 ip address 10.0.34.4 255.255.255.248
 ip ospf network point-to-point
 duplex auto
 speed auto
!
!
!


Configuring OSPF:

For all the P routers to learn every link address, we need to run a routing protocol on top; in this case we'll use OSPF.

On all routers:

!
router ospf 1
 log-adjacency-changes
 network 0.0.0.0 255.255.255.255 area 0
!


Enabling LDP:

Now we are going to enable LDP. First we'll test the LDP autoconfig command; enabling autoconfig should enable LDP on every interface participating in the IGP process (OSPF). Autoconfig will be configured on P1 and P2. P3 and P4 are going to be configured with the interface command mpls ip. These are the two main approaches to configuring an LDP adjacency between neighbors.

P1 and P2

!
router ospf 1
 mpls ldp autoconfig area 0
!
!

P3

!
interface FastEthernet0/0
 mpls ip
interface FastEthernet0/1
 mpls ip
!

P4

!
interface FastEthernet0/0
 mpls ip

Verifying on which interfaces MPLS is running:


P1#sh mpls interfaces
Interface              IP            Tunnel   Operational
FastEthernet0/0        Yes (ldp)     No       Yes        
Serial0/0              Yes (ldp)     No       Yes        

P2#sh mpls interfaces 
Interface              IP            Tunnel   Operational
FastEthernet0/0        Yes (ldp)     No       Yes         
FastEthernet0/1        Yes (ldp)     No       Yes         
Serial0/0              Yes (ldp)     No       Yes         

P3#sh mpls interfaces 
Interface              IP            Tunnel   Operational
FastEthernet0/0        Yes (ldp)     No       Yes         
FastEthernet0/1        Yes (ldp)     No       Yes         

P4#sh mpls interfaces 
Interface              IP            Tunnel   Operational
FastEthernet0/0        Yes (ldp)     No       Yes        



After configuring, we should see the following message on each neighbor indicating the establishment of the adjacency:

P1#
*Mar  1 08:43:09.889: %LDP-5-NBRCHG: LDP Neighbor 10.2.2.2:0 (1) is UP

We can verify the neighbor adjacency with the following commands:

P1#sh mpls ldp neighbor 
    Peer LDP Ident: 10.2.2.2:0; Local LDP Ident 10.1.1.1:0
TCP connection: 10.2.2.2.32135 - 10.1.1.1.646
State: Oper; Msgs sent/rcvd: 20/20; Downstream
Up time: 00:08:02
LDP discovery sources:
 Serial0/0, Src IP addr: 10.100.200.2
 FastEthernet0/0, Src IP addr: 10.0.12.2
        Addresses bound to peer LDP Ident:
          10.0.12.2       10.100.200.2    10.0.23.2       10.2.2.2        


P3#sh mpls ldp neighbor 
    Peer LDP Ident: 10.2.2.2:0; Local LDP Ident 10.3.3.3:0
TCP connection: 10.2.2.2.646 - 10.3.3.3.23650
State: Oper; Msgs sent/rcvd: 20/20; Downstream
Up time: 00:08:39
LDP discovery sources:
 FastEthernet0/1, Src IP addr: 10.0.23.2
        Addresses bound to peer LDP Ident:
          10.0.12.2       10.100.200.2    10.0.23.2       10.2.2.2        
    Peer LDP Ident: 10.4.4.4:0; Local LDP Ident 10.3.3.3:0
TCP connection: 10.4.4.4.64168 - 10.3.3.3.646
State: Oper; Msgs sent/rcvd: 20/20; Downstream
Up time: 00:08:27
LDP discovery sources:
 FastEthernet0/0, Src IP addr: 10.0.34.4
        Addresses bound to peer LDP Ident:
          10.0.34.4       10.4.4.4        

We can also use the show mpls ldp discovery command to see which neighbors are being discovered on the enabled interfaces.

P4#sh mpls ldp discovery 
 Local LDP Identifier:
    10.4.4.4:0
    Discovery Sources:
    Interfaces:
FastEthernet0/0 (ldp): xmit/recv
   LDP Id: 10.3.3.3:0


P3#sh mpls ldp discovery 
 Local LDP Identifier:
    10.3.3.3:0
    Discovery Sources:
    Interfaces:
FastEthernet0/0 (ldp): xmit/recv
   LDP Id: 10.4.4.4:0
FastEthernet0/1 (ldp): xmit/recv
   LDP Id: 10.2.2.2:0



Modifying the Hello/Holdtime interval on the LDP adjacency.

First of all we need to verify the current values; this is achieved with the following command:

P1#sh mpls ldp parameters 
Protocol version: 1
Downstream label generic region: min label: 16; max label: 100000
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 15 sec; interval: 5 sec
Discovery targeted hello: holdtime: 90 sec; interval: 10 sec
Downstream on Demand max hop count: 255
Downstream on Demand Path Vector Limit: 255
LDP for targeted sessions
LDP initial/maximum backoff: 15/120 sec
LDP loop detection: off

Here we can determine the current values of the hello interval and holdtime, including the targeted hello and holdtime. Currently the values are a 15-second holdtime and a 5-second hello interval. For testing, let's change them to hello: 10 and holdtime: 30 on P1 and P2.

P1 and P2:

!
!
mpls ldp discovery hello interval 10
mpls ldp discovery hello holdtime 30
!
!

To verify the changes:

P1#sh mpls ldp parameters 
Protocol version: 1
Downstream label generic region: min label: 16; max label: 100000
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 30 sec; interval: 10 sec
Discovery targeted hello: holdtime: 90 sec; interval: 10 sec
Downstream on Demand max hop count: 255
Downstream on Demand Path Vector Limit: 255
LDP for targeted sessions
LDP initial/maximum backoff: 15/120 sec
LDP loop detection: off

P2#sh mpls ldp parameters 
Protocol version: 1
Downstream label generic region: min label: 16; max label: 100000
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 30 sec; interval: 10 sec
Discovery targeted hello: holdtime: 90 sec; interval: 10 sec
Downstream on Demand max hop count: 255
Downstream on Demand Path Vector Limit: 255
LDP for targeted sessions
LDP initial/maximum backoff: 15/120 sec
LDP loop detection: off

Let us configure LDP authentication between P3 and P4.

P3: 

!
mpls ldp neighbor 10.4.4.4 password cisco123
!

P4:

!
mpls ldp neighbor 10.3.3.3 password cisco123
!



After configuring the mpls ldp neighbor command we should get the following message.

*Mar  1 09:26:28.665: %LDP-5-NBRCHG: LDP Neighbor 10.4.4.4:0 (2) is DOWN (Session's MD5 password changed)

When the authentication is configured only on one peer, we get the following message:

 *Mar  1 09:26:34.389: %TCP-6-BADAUTH: No MD5 digest from 10.4.4.4(42183) to 10.3.3.3(646)

After configuring it on both ends, the LDP adjacency comes back up.

P4#
*Mar  1 09:32:21.933: %LDP-5-NBRCHG: LDP Neighbor 10.3.3.3:0 (1) is UP

In the next part we'll continue with the following topics:

  • Targeted LDP session.
  • Targeted Hello/holdtime interval modification.
  • Targeted Sessions with the accept from ACL.
  • Advertisement control, outbound, inbound.
  • LDP session Protection.






Wednesday, September 19, 2012

MPLS Fundamentals Review (Chapter IV)


MPLS Fundamentals Chapter 4

Label Distribution Protocol.

LDP Overview

To get packets across a label switched path (LSP) through the MPLS network, all LSRs must run a label distribution protocol and exchange label bindings. When all the LSRs have the labels for a particular Forwarding Equivalence Class (FEC), the packets can be forwarded on the LSP by label-switching them at each LSR. The LFIB, which is the table that forwards labeled packets, is fed by the label bindings found in the LIB. The LIB is fed by the label bindings received via LDP, RSVP, MP-BGP, or statically assigned label bindings. Because RSVP distributes labels only for MPLS traffic engineering and MP-BGP distributes labels only for BGP routes, you are left with LDP for distributing all the labels for interior routes. Therefore, all directly connected LSRs must establish an LDP session between them. The LDP peers exchange label mapping messages across this LDP session.

LDP major functions:


  • The discovery of LSRs that are running LDP
  • Session establishment and maintenance
  • Advertising of label mappings
  • Housekeeping by means of notification.



  • Two LSRs running LDP discover each other by means of hello messages. 
  • The second step is for them to establish a session across a TCP connection.
  • Across this TCP connection, LDP advertises the label mapping messages between the two LDP peers.
  • These label mapping messages are used to advertise, change, or retract label bindings.
  • LDP notifies the LDP neighbor of advisory and error messages by sending notification messages.


LDP Operation


  • LSRs running LDP send LDP hello messages on all links that are LDP enabled.
  • These are all the interfaces configured with the mpls ip command.
  • LDP hello messages are UDP messages that are sent on the links to the "all routers on this subnet" multicast IP address 224.0.0.2
  • The UDP port used for LDP is 646.
  • The hello message contains a Hold time. If no hello message is received from that LSR before the Hold time expires, the LSR removes that neighbor from the list of discovered LDP neighbors.
  • To see whether the LSR sends and receives LDP hellos, as well as the hello interval and the Hold time, use the show mpls ldp discovery [detail] command.
  • If LDP hello messages are sent and received on an interface, there is an LDP adjacency across the link between two LSRs that are running LDP.
  • The show mpls interfaces command allows you to quickly see which interfaces are running LDP.
  • To change the interval between sending hello messages or to change the LDP Hold time, you can use the command mpls ldp discovery hello {holdtime | interval} seconds (see the sketch after this list).
  • The default value for the holdtime is 15 seconds, for the hello is 5 seconds.
  • If two LDP peers have different LDP Hold times configured, the smaller of the two values is used as the Hold time for that LDP discovery source.
  • If the Hold time expires for one link, that link is removed from the LDP discovery sources list. If the last link from the LDP discovery sources is removed for one LDP neighbor, the LDP session is torn down.
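
As a quick illustration of the mpls ldp discovery hello command mentioned above, here it is with arbitrary values, matching the 10-second hello and 30-second holdtime used in the labs above:

!
mpls ldp discovery hello interval 10
mpls ldp discovery hello holdtime 30
!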


LSRs that are running LDP have an LDP Identifier, or LDP ID. This LDP ID is a 6-byte field that consists of 4 bytes identifying the LSR uniquely and 2 bytes identifying the label space that the LSR is using. If the last two bytes are 0, the label space is the platform-wide or per-platform label space. If they are non-zero, a per-interface label space is used. If that is the case, multiple LDP IDs are used, where the first 4 bytes are the same value, but the last two bytes indicate a different label space.

The first 4 bytes of the LDP ID are an IP address taken from an operational interface on the router. If loopback interfaces exist, the highest IP address of the loopback interfaces is taken for the LDP ID or LDP router ID. If no loopback interfaces exist, the highest IP address of an interface is taken. You can change the LDP router ID manually by using the command mpls ldp router-id interface [force]. If you use the force keyword, the LDP router ID is changed immediately. Without this keyword, the LDP router ID is changed only the next time it is necessary to select the router ID. This happens when the interface that determines the current LDP router ID is shut down.
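
For example, a minimal sketch that pins the LDP router ID to a loopback (the interface name is an assumption; use whichever loopback carries the address your IGP advertises):

!
! Pin the LDP router ID to Loopback0; "force" applies the change immediately
mpls ldp router-id Loopback0 force
!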

In Cisco IOS, the MPLS LDP router ID needs to be present in the routing table of the LDP neighboring routers. If it is not, the LDP session is not formed. Therefore, the IP address that is the LDP router ID on the router must be included in the routing protocol of the LSR.



LDP Session Establishment and Maintenance

After two LSRs have discovered each other by means of LDP hellos, one of them tries to open a TCP connection to TCP port 646 on the other LSR. If the TCP connection is set up, both LSRs negotiate LDP session parameters by exchanging LDP initialization messages. These parameters include such things as:


  • Timer Values
  • Label distribution method
  • Virtual path identifier (VPI)/virtual channel identifier (VCI) ranges for Label Controlled ATM (LC-ATM).
  • Data-link connection identifier (DLCI) ranges for LC-Frame Relay.


If the LDP peers agree on the session parameters, they keep the TCP connection between them. If not, they retry creating the LDP session, but at a throttled rate.

The command mpls ldp backoff initial-backoff maximum-backoff controls this throttling rate.

This command slows down the LDP session setup attempts of two LDP LSRs, when the two neighboring LDP peers are incompatible in terms of the parameters they exchange. If the session setup attempt fails, the next attempts are undertaken at an exponentially increased time, until the maximum backoff time is reached.

The initial-backoff parameter is a value between 5 and 2,147,483, with a default of 15 seconds. The maximum-backoff is a value between 5 and 2,147,483, with a default of 120 seconds.
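
A minimal sketch with illustrative values, raising the initial backoff to 30 seconds and the maximum to 240 seconds:

!
mpls ldp backoff 30 240
!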

After the LDP session has been set up, it is maintained by either the receipt of LDP packets or a periodic keepalive message. Each time the LDP peer receives an LDP packet or keepalive message, the keepalive timer is reset for that peer.

The command to change the LDP session keepalive timer is mpls ldp holdtime seconds.

The LDP session is a TCP connection established between two IP addresses of the LSRs. Usually these IP addresses are the ones used to create the LDP router identifier on each router. To change the IP address, configure the command mpls ldp discovery transport-address {interface | ip-address} on the interface of the router and specify an interface or IP address to be used for the LDP session.
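
A short sketch of both knobs, assuming an illustrative 60-second session holdtime and a link that should source the LDP session from the loopback address 10.1.1.1 (values and interface are assumptions):

!
! Session keepalive: hold the LDP session for 60 seconds without traffic
mpls ldp holdtime 60
!
! Source the LDP TCP session on this link from 10.1.1.1
interface FastEthernet0/0
 mpls ldp discovery transport-address 10.1.1.1
!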

Number of LDP Sessions

When the per-platform label space is the only label space used between a pair of LSRs, one LDP session suffices. This is so because only one set of label bindings is exchanged between the two LSRs, no matter how many links are between them.

With per-interface label space, each label binding has relevance only to that interface. Therefore, for each interface that has a per-interface label space, one LDP session must exist between the pair of routers.

For all frame-mode links, only one LDP session should exchange the labels in per-platform label space. For each LC-ATM link, an LDP session should exchange the labels in the per-interface label space.

Advertising of Label Mappings

There are three different modes in which the LSRs can behave: advertisement, label retention, and LSP control mode. Each of the three modes has two possibilities, which lead to the following six modes:


  • Unsolicited Downstream (UD) versus Downstream-on-Demand (DoD) advertisement mode.
  • Liberal Label Retention (LLR) versus Conservative Label Retention (CLR) mode.
  • Independent LSP control versus Ordered LSP control mode.


In UD advertisement mode, the LDP peer distributes the label bindings unsolicited to its LDP peers. The label bindings are a set of (LDP Identifier, label) per prefix. An LDP router receives multiple label bindings for each prefix, namely one per LDP peer. All these label bindings are stored in the LIB of the router. However, only one LDP peer is the downstream LSR for that particular prefix.

The downstream LSR is found by looking up the next hop for that prefix in the routing table. Only the remote binding associated with that next-hop LSR should be used to populate the LFIB. This means that only one label from all the advertised label bindings from all the LDP neighbors of this LSR should be used as outgoing label in the LFIB for that prefix.

The problem is that the label bindings are advertised as (LDP Identifier, label), without the IP addresses of the interfaces. This means that to find the outgoing label for a particular prefix, you must map the LDP Identifier to the IP address of the interface on the downstream LSR that points back to this LSR. You can only do this if each LDP peer advertises all its IP addresses.

These IP addresses are advertised by the LDP peer with Address messages and withdrawn with Address Withdraw messages. You can find these addresses when you are looking at the LDP peer; they are called the bound addresses for the LDP peer.

Each LSR assigns one local label to each IGP prefix in the routing table. This is the local label binding. These local bindings are stored in the LIB on the router. Each of these labels and the prefixes they are assigned to are advertised via LDP to all the LDP peers. These label bindings are the remote bindings on the LDP peers and are stored in the LIB.

The concept of split horizon does not exist; an LDP peer assigns its own local label to a prefix and advertises that back to the other LDP peer, even though that other LDP peer owns the prefix (it is a connected prefix) or that other LDP peer is the downstream LSR.

Label Withdrawing

When an LDP peer advertises a label binding, the receiving LDP peers keep it until the LDP session goes down or until the label is withdrawn. The label might be withdrawn if the local label changes. The local label might change if, for example, the interface with a certain prefix on it goes down, but another LSR still advertises the prefix. Therefore, the local label for that prefix changes from implicit NULL to a non-reserved label.  If this happens, the implicit NULL label is immediately withdrawn by sending a Label Withdraw message to the LDP peers. The new label is advertised in  a Label Mapping message.

In older Cisco IOS software (pre 12.0(21)ST), the default behavior was not to send a Label Withdraw message to withdraw the label before advertising the new label for the FEC. The new label advertisement was also an implicit label withdraw.

The command mpls ldp neighbor neighbor-address implicit-withdraw is used to keep the old behavior.
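
For example (the neighbor address is illustrative):

!
mpls ldp neighbor 10.2.2.2 implicit-withdraw
!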

Housekeeping by Means of Notification

Notification messages are needed for the housekeeping of LDP sessions.

The following events can be signaled by sending notification messages:


  • Malformed protocol data unit (PDU) or message
  • Unknown or malformed type-length-value (TLV).
  • Session keepalive timer expiration.
  • Unilateral session shutdown
  • Initialization message events
  • Events resulting from other messages
  • Internal errors
  • Loop detection
  • Miscellaneous events.


Targeted LDP Session.

Normally, LDP sessions are set up between directly connected LSRs. However, in some cases a remote or targeted LDP session is needed. This is an LDP session between LSRs that are not directly connected.

Examples in which the targeted LDP session is needed are AToM networks and TE tunnels in an MPLS VPN network. In the case of AToM, an LDP session must exist between each pair of PE routers. In the case of TE tunnels in an MPLS VPN network, with the TE tunnels ending on a P router, the head-end and the tail-end LSR of the TE tunnel need a targeted LDP session between them to get the MPLS VPN traffic correctly label-switched through the MPLS VPN network.

For LDP neighbors that are not directly connected, the LDP neighborship needs to be configured manually on both routers with the mpls ldp neighbor ip-address targeted command.

To change the LDP hello interval and the Hold time for targeted LDP sessions, you can use the following command:

mpls ldp discovery {hello {holdtime | interval} seconds | targeted-hello {holdtime | interval} seconds | accept [from acl]}

Another way of achieving the same result (targeted session) is to configure the targeted LDP neighbor on one router only and to configure the other router to accept targeted LDP sessions from specific LDP routers.

The command to configure this is mpls ldp discovery targeted-hello accept [from acl]. To prevent just any router from setting up an LDP session with this router, you can use the command with an access list so that you can specify which routers are allowed to set up a targeted LDP session.
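
A minimal sketch of this one-sided setup, reusing the P1/P4 addresses from the labs above (the ACL number is arbitrary):

!
! On the initiating router (P1): actively build the targeted session
mpls ldp neighbor 10.4.4.4 targeted ldp
!
! On the accepting router (P4): only answer targeted hellos from P1
access-list 10 permit 10.1.1.1
mpls ldp discovery targeted-hello accept from 10
!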


LDP authentication 

LDP sessions are TCP sessions. To protect these sessions you can use Message Digest 5 (MD5) authentication. MD5 adds a signature, called the MD5 digest, to the TCP segments. The MD5 digest is calculated for the particular TCP segment using the configured password on both ends of the connection. The configured MD5 password is never transmitted.

The command used to configure MD5 for LDP is mpls ldp neighbor [vrf vpn-name] ip-addr password [0 | 7] pswd-string.

If one LSR has MD5 configured for LDP and the other does not, the following message is logged:

%TCP-6-BADAUTH: No MD5 digest from 172.16.20.1(11092) to 172.16.20.2(646)

If both LDP peers have a password configured for MD5 but the passwords do not match, the following message is logged:

%TCP-6-BADAUTH: Invalid MD5 digest from 10.200.254.4(11093) to 10.200.254.3(646)


Controlling the Advertisement of Labels via LDP.

You can configure LDP to advertise or not advertise certain labels to certain LDP peers. The locally assigned labels that are advertised to the LDP peers are then used as outgoing labels on those LSRs. The command syntax is as follows:

mpls ldp advertise-labels [vrf vpn-name] [interface interface | for prefix-access-list [to peer-access-list]]

You cannot control the LDP advertisement of labels with the mpls ldp advertise-labels command in LC-ATM networks.

That is because LC-ATM networks use DoD instead of UD label advertisement mode. DoD has its own command to limit LDP label advertisement: the command mpls ldp request-labels is used instead of mpls ldp advertise-labels for LC-ATM interfaces.

"Do not forget to configure no mpls ldp advertise-labels, too. If you forget this command and only configure the mpls ldp advertise-labels for prefix-access-list to peer-access-list command, the LSR still sends labels for all prefixes via LDP."

The Cisco IOS LDP implementation allows you to specify more than one mpls ldp advertise-labels for prefix-access-list to peer-access-list command.
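
A minimal sketch, along the lines of the loopback-filtering example in the Part II lab above (ACL number and prefix are illustrative). Global advertisement is disabled first, then re-enabled only for what the ACL permits:

!
access-list 1 permit 10.3.3.3
no mpls ldp advertise-labels
mpls ldp advertise-labels for 1
!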

MPLS LDP Inbound Label Binding Filtering.

You can use inbound label binding filtering on the receiving LDP peer if you cannot apply the outbound filtering of label bindings. For instance, you can filter out all received label bindings from the LDP peers, except for the label bindings of the loopback interfaces of the PE routers in an MPLS VPN network. Usually these loopback interfaces carry the BGP next-hop IP addresses, and the LSRs can use the label associated with that prefix to forward the labeled customer VPN traffic.

The command is mpls ldp neighbor [vrf vpn-name] nbr-address labels accept acl.

LDP Autoconfiguration

Easier than configuring mpls ip on every interface separately is enabling LDP Autoconfiguration for the IGP. Every interface on which the IGP is running then has LDP enabled.

The OSPF router command to enable LDP autoconfiguration is:

mpls ldp autoconfig [area area-id]

You can disable it on specific interfaces if you want to. The interface command to disable LDP autoconfiguration on an interface is as follows:

no mpls ldp igp autoconfig

MPLS LDP-IGP Synchronization.

A problem with MPLS networks is that LDP and the IGP of the network are not synchronized. Synchronization means that packet forwarding out of an interface happens only if both the IGP and LDP agree that this is the outgoing link to be used. A common problem in MPLS networks running LDP is that when the LDP session is broken on a link, the IGP still has that link as outgoing; thus, packets are still forwarded out of that link. This happens because the IGP installs the best path in the routing table for any prefix. Therefore, traffic for prefixes with a next hop out of a link where LDP is broken becomes unlabeled.

This is a problem for more than just the IPv4-over-MPLS case. With MPLS VPN, AToM, Virtual Private LAN Service (VPLS), or IPv6 over MPLS, the packets must not become unlabeled in the MPLS network.

One LDP session being down while the IGP adjacency is up between two LSRs can result in major problems because much traffic can be lost.

The same problem can occur when LSRs restart. The IGP can be quicker in establishing the adjacencies than LDP can establish its sessions. This means that the IGP forwarding is already happening before the LFIB has the necessary information to start the correct label forwarding. The packets are incorrectly forwarded (unlabeled) or dropped until the LDP session is established.

The solution is MPLS LDP-IGP Synchronization. This feature ensures that the link is not used to forward (unlabeled) traffic while the LDP session across the link is down. Rather, the traffic is forwarded out another link where the LDP session is still established.


How MPLS LDP-IGP Synchronization Works

When MPLS LDP-IGP synchronization is active for an interface, the IGP announces that link with maximum metric until synchronization is achieved, that is, until the LDP session is running across that interface.

The maximum link metric for OSPF is 65535 (hex 0xFFFF). No path through the interface where LDP is down is used unless it is the only path. After the LDP session is established and label bindings have been exchanged, the IGP advertises the link with its normal IGP metric.

Basically, OSPF does not form an adjacency across a link if the LDP session is not established first across that link; OSPF does not send out hellos on the link.

Until the LDP session is established or until the sync holddown timer has expired, the OSPF adjacency is not established. Synchronized here means that the local label bindings have been sent over the LDP session to the LDP peer. However, when sync is turned on at router A and that router has only one link to router B and no other IP connectivity to router B via another path (that is, via other routers), the OSPF adjacency never comes up: OSPF waits for the LDP session to come up, but the LDP session cannot come up because router A cannot have the route for the LDP router ID of router B in its routing table. The OSPF and LDP adjacency could stay down forever in this situation. If router A has only router B as a neighbor, the LDP router ID of router B is not reachable; this means that no route exists for it in the routing table of router A.

In that case, LDP-IGP sync detects that the peer is not reachable and lets OSPF bring up the adjacency anyway.

In some cases, the problem with the LDP session might be a persistent one; therefore, it might not be desirable to keep waiting for the IGP adjacency to be established. The solution for this is to configure a holddown timer for the sync. If the timer expires before the LDP session is established, the OSPF adjacency is built anyway.

MPLS LDP-IGP Sync configuration.

MPLS LDP-IGP sync is enabled for the IGP process.

The command to enable it for the IGP is mpls ldp sync and it is configured under the router process.

You can disable MPLS LDP-IGP Sync on one particular interface with the command no mpls ldp igp sync.

If sync is not achieved, the IGP waits indefinitely to bring up the adjacency. You can change this with the global command mpls ldp igp sync holddown msecs, which instructs the IGP to wait only for the configured time.
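
A minimal OSPF sketch (the 10-second holddown and interface name are arbitrary illustrations; note the timer is in milliseconds):

!
! Wait at most 10 seconds for LDP before bringing the adjacency up anyway
mpls ldp igp sync holddown 10000
!
router ospf 1
 mpls ldp sync
!
! Optionally exempt one interface from the feature:
interface FastEthernet0/0
 no mpls ldp igp sync
!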

When OSPF is waiting for LDP to synchronize, it says "Interface is down and pending LDP."

When the OSPF adjacency is up but the LDP session is not, OSPF says "interface is up and sending maximum metric."

MPLS LDP Session Protection.

A common problem in networks is flapping links. Because the IGP adjacency and the LDP session run across the link, they go down when the link goes down. The impact is pretty severe, because the routing protocol and LDP take time to rebuild the neighborship: LDP has to rebuild the LDP session and must exchange the label bindings again. To avoid having to rebuild the LDP session altogether, you can protect it.

When the LDP session between two directly connected LSRs is protected, a targeted LDP session is built between the two LSRs. When the directly connected link goes down between the two LSRs, the targeted LDP session is kept up as long as an alternative path exists between the two LSRs. The LDP link adjacency is removed when the link goes down, but the targeted adjacency keeps the LDP session up.

The global command to enable LDP Session Protection is this:

mpls ldp session protection [vrf vpn-name] [for acl] [duration seconds]

The access list (acl) you can configure lets you specify the LDP peers that should be protected. It should hold the LDP router identifier of the LDP neighbors that need protection. The duration is the time that the protection (the targeted LDP session) should remain in place after the LDP link adjacency has gone down. The default value is infinite.

For the protection to work, you need to enable it on both LSRs. If this is not possible, you can enable it on one LSR, and the other LSR can accept the targeted LDP hellos by configuring the command mpls ldp discovery targeted-hello accept.
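
A sketch of that asymmetric setup (the peer address and the 300-second duration are illustrative):

!
! On the LSR where protection can be configured:
access-list 3 permit 10.2.2.2
mpls ldp session protection for 3 duration 300
!
! On the other LSR, just accept the targeted hellos:
mpls ldp discovery targeted-hello accept
!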

Other Features

A useful feature is LDP Graceful Restart. It specifies a mechanism for LDP peers to preserve the MPLS forwarding state when the LDP session goes down.














Monday, September 17, 2012

MPLS Fundamentals Review (Chapter III).

MPLS Fundamentals Chapter 3

Forwarding Labeled Packets.

Label Operations.

The possible label operations are swap, push, and pop.


By looking at the top label of the received labeled packet and the corresponding entry in the LFIB, the LSR knows how to forward the packet. The LSR determines what label operation needs to be performed, and what the next hop is to which the packet needs to be forwarded.

IP Lookup Versus Label Lookup

When a router receives an IP packet, the lookup done is an IP lookup. In Cisco IOS, this means that the packet is looked up in the CEF table. When a router receives a labeled packet, the lookup is done in the LFIB of the router. The router knows whether it received a labeled packet or an IP packet by looking at the protocol field in the Layer 2 header.


If an ingress LSR receives an IP packet and forwards it as labeled, it is called the IP-to-Label forwarding case.

If an LSR receives a labeled packet, it can strip off the labels and forward it as an IP packet, or it can forward it as a labeled packet. The first case is referred to as the label-to-IP forwarding case; the second is referred to as the label-to-label forwarding case.


In Cisco IOS, CEF switching is the only IP switching mode that you can use to label packets. Other IP switching modes, such as fast switching, cannot be used, because the fast switching cache does not hold information on labels.

To see all the labels that change on an already labeled packet, you must use the show mpls forwarding-table [network {mask | length}] [detail] command.

If the detail keyword is specified, you can see all the labels that change in the label stack. Without the detail keyword, you see only the pushed label.


When you perform an aggregation (or summarization) on an LSR, it advertises a specific label for the aggregated prefix, but the outgoing label in the LFIB shows "Aggregate". Because this LSR is aggregating a range of prefixes, it cannot forward an incoming labeled packet by label-swapping the top label. The outgoing label entry showing "Aggregate" means that the aggregating LSR needs to remove the label of the incoming packet and must do an IP lookup to determine the more specific prefix to use for forwarding this IP packet.

You now know how the labeled packet is forwarded to a specific next hop after a label operation. The CEF adjacency table, however, determines the outgoing data link encapsulation. The adjacency table provides the necessary Layer 2 information to forward the packet to the next-hop LSR.


Label Operations Recap:

  • Pop: The top label is removed. The packet is forwarded with the remaining label stack or as an unlabeled packet.
  • Swap: The top label is removed and replaced with a new label.
  • Push: The top label is replaced with a new label (swapped), and one or more labels are added (pushed) on top of the swapped label.
  • Untagged/No label: The label stack is removed, and the packet is forwarded unlabeled.
  • Aggregate: The label stack is removed, and an IP lookup is done on the IP packet.


Load Balancing Labeled Packets.

If multiple equal-cost paths exist for an IPv4 prefix, Cisco IOS can load-balance labeled packets.

If labeled packets are load-balanced, they can have the same outgoing labels, but they can also be different. The outgoing labels are the same if the two links are between a pair of routers and both links belong to the platform label space. If multiple next-hop LSRs exist, the outgoing label for each path is usually different, because the next-hop LSRs assign labels independently.


If a prefix is reachable via a mix of labeled and unlabeled (IP) paths, Cisco IOS does not consider the unlabeled paths for load-balancing labeled packets. That is because in some cases the traffic going over the unlabeled path does not reach its destination. In the case of plain IPv4-over-MPLS (MPLS running on an IPv4 network), the packets do reach the destination even if they become unlabeled.

At the place where the packets become unlabeled, an IP lookup has to occur. Because the network is running IPv4 everywhere, it should be able to deliver the packet to its destination without a label. However, in some scenarios, as with MPLS VPN or Any Transport over MPLS (AToM), a packet that becomes unlabeled in the MPLS network at a certain link does not make it to its final destination.

In MPLS VPN, the MPLS payload is an IPv4 packet, but the P routers do not normally have the VPN routing tables, so they cannot route the packet to its destination.

In the case of AToM, the MPLS payload is a Layer 2 frame; therefore, if the packet loses its label stack on a P router, the P router does not have the Layer 2 forwarding tables present to forward the frame further. This is why, in an MPLS network, labeled packets are not load-balanced over an IP path and a labeled path.

Unknown Label.

It is possible for something to go wrong in the MPLS network and for the LSR to start receiving labeled packets with a top label that the LSR does not find in its LFIB. The LSR could theoretically try two things: strip off the labels and try to forward the packet, or drop the packet. A Cisco LSR drops the packet.

Reserved Labels

Labels 0 through 15 are reserved labels. An LSR cannot use them in the normal case for forwarding packets.

Label 0 is the explicit NULL label, whereas label 3 is the implicit NULL label. Label 1 is the router alert label, whereas label 14 is the OAM alert label. The other reserved labels between 0 and 15 have not been assigned yet.

Implicit NULL label.

The implicit NULL label is the label that has a value of 3. An egress LSR assigns the implicit NULL label to a FEC if it does not want to assign a label to that FEC, thus requesting the upstream LSR to perform a pop operation.

In the case of a plain IPv4-over-MPLS network, such as an IPv4 network in which LDP distributes labels between the LSRs, the egress LSR running Cisco IOS assigns the implicit NULL label to its connected and summarized prefixes. The benefit of this is that if the egress LSR were to assign a label for these FECs, it would receive the packets with one label on top. It would then have to do two lookups: first, it would have to look up the label in the LFIB, just to figure out that the label needs to be removed; then it would have to perform an IP lookup. These are two lookups, and the first is unnecessary.

The solution for this double lookup is to have the egress LSR signal the last but one (or penultimate) LSR in the label switched path (LSP) to send the packet without a label. The egress LSR signals the penultimate LSR to use implicit NULL by not sending a regular label, but by sending the special label with value 3. The result is that the egress LSR receives an IP packet and only needs to perform an IP lookup to be able to forward the packet. This enhances the performance on the egress LSR.

The use of implicit NULL at the end of an LSP is called penultimate hop popping (PHP).

The LFIB entry for the LSP on the PHP router shows "Pop Label" as the outgoing label.

Explicit NULL label.

The use of implicit NULL has one downside: the packet is forwarded with one label less than the penultimate LSR received it with, or unlabeled if it was received with only one label. Besides the label value, the label also holds the Experimental (EXP) bits. When a label is removed, the EXP bits are also removed. Because the EXP bits are exclusively used for QoS, the QoS part of the packet is lost when the top label is removed.

The explicit NULL label is the solution to this problem: the egress LSR signals the IPv4 explicit NULL label (value 0) to the penultimate hop router. The egress LSR then receives labeled packets with a label of value 0 as the top label. The LSR cannot forward the packet by looking up the value 0 in the LFIB, because it can be assigned to multiple FECs. The LSR just removes the explicit NULL label. After the LSR removes the explicit NULL label, another lookup has to occur, but the advantage is that the router can derive the QoS information of the received packet by looking at the EXP bits of the explicit NULL label.
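
In Cisco IOS, the egress LSR can be told to advertise explicit NULL instead of implicit NULL with one global command; a minimal sketch (this form applies to all prefixes and all peers):

!
mpls ldp explicit-null
!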

You can copy the EXP bits value to the precedence or DiffServ bits when performing PHP and thus preserve the QoS information. Or, if the label stack has multiple labels and the top label is popped off, you can copy the EXP bits value to the EXP field of the new top label.
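If you prefer the egress LSR to advertise explicit NULL instead of implicit NULL, so the EXP bits survive the last hop, Cisco IOS has a global knob for this. A minimal sketch, on the egress LSR:

P4(config)#mpls ldp explicit-null

From then on the penultimate LSR swaps to label 0 instead of popping, and the egress LSR can read the EXP bits before the final IP lookup.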

Router Alert Label.

The Router Alert label is the one with value 1. This label can be present anywhere in the label stack except at the bottom. When the router alert label is the top label, it alerts the LSR that the packet needs a closer look. Therefore, the packet is not forwarded in hardware; it is examined by a software process.

OAM Alert label

The label with value 14 is the Operation and Maintenance (OAM) Alert label as described by ITU-T Recommendation Y.1711 and RFC 3429. OAM is basically used for failure detection, localization, and performance monitoring. This label differentiates OAM packets from normal user data packets. Cisco IOS does not use label 14; it does perform MPLS OAM, but not by using label 14.

Unreserved Labels

Except for the reserved labels 0 through 15, you can use all the label values for normal packet forwarding.

In Cisco IOS, the default range is 16 through 100,000.

You can change the label range with the mpls label range min max command.
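For example, to shrink the local label range (the boundaries here are arbitrary, purely for illustration):

P1(config)#mpls label range 200 16000
P1#show mpls label range

Depending on the IOS version, the new range may only apply to new label bindings or may require a reload to take full effect.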

TTL Behaviour of Labeled Packets.

With the introduction of MPLS, labels are added to IP packets. This calls for a mechanism by which the TTL is propagated from the IP header into the label stack and vice versa.

TTL behavior in the Case of IP-to-Label or Label-to-IP.

When an IP packet enters the MPLS cloud, such as on the ingress LSR, the IP TTL value is copied (after being decremented by 1) to the MPLS TTL value of the pushed label(s). At the egress LSR, the label is removed and the IP header is exposed again. The IP TTL value is copied from the MPLS TTL value in the received top label after decrementing it by 1.

In Cisco IOS, however, a safeguard guards against possible routing loops by not copying the MPLS TTL to the IP TTL if the MPLS TTL is greater than the IP TTL of the received labeled packet.
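Related to this, Cisco IOS lets you turn off copying the IP TTL into the pushed label at the ingress LSR, which hides the core LSRs from customer traceroutes. A minimal sketch:

P1(config)#no mpls ip propagate-ttl

With TTL propagation disabled, the pushed label carries a TTL of 255, so the label TTL does not expire inside the core for normal traffic.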


TTL behavior in the Case of Label-to-Label.


  • If the operation performed on the labeled packet is a swap, the TTL of the incoming label minus 1 is copied to the swapped label.
  • If the operation is to push one or more labels, the received MPLS TTL of the top label minus 1 is copied to the swapped label and all pushed labels.
  • If the operation is a pop, the TTL of the incoming label minus 1 is copied to the newly exposed label, unless that value is greater than the TTL of the newly exposed label, in which case the copy does not happen.


The intermediate LSR does not change the TTL field in the underlying labels or the TTL field in the IP header. An LSR only looks at, or changes, the top label in the label stack of a packet.

TTL Expiration

When a labeled packet is received with a TTL of 1, the receiving LSR drops the packet and sends an ICMP message "time exceeded" (type 11, code 0) to the originator of the IP packet. However, the ICMP message is not immediately sent back to the originator of the packet because an interim LSR might not have an IP path toward the source of the packet. The ICMP message is forwarded along the LSP the original packet was following.

The reason for this forwarding of the ICMP message along the LSP that the original packet with the expiring TTL was following is that in some cases the LSR that is generating the ICMP message has no knowledge of how to reach the originator of the original packet.

It is important that the P router (LSR) where the TTL expires notes what the MPLS payload is. The P router checks whether the payload is an IPv4 (or IPv6) packet. If it is, it can generate the ICMP "time exceeded" message and forward it along the LSP. However, if the payload is not an IPv4 (or IPv6) packet, the P router cannot generate the ICMP message and simply drops the packet. A case in which the LSR drops a packet with an expiring TTL is AToM.
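You can observe the labeled hops with a traceroute across the lab core; IOS prints the label stack the probe carried when each time-exceeded message comes back. A sketch from P1 toward P4's loopback (label values are illustrative):

P1#traceroute 10.4.4.4
  1 10.0.12.2 [MPLS: Label 17 Exp 0] 40 msec
  2 10.0.23.3 [MPLS: Label 18 Exp 0] 36 msec
  3 10.0.34.4 24 msec

The last hop replies without label information because of PHP on P3.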

MPLS MTU.

Data links in MPLS networks have a specific MTU, but now for labeled packets. Take the case of an IPv4 network implementing MPLS. All IPv4 packets have one or more labels. This implies that the labeled packets are slightly bigger than the IP packets, because for every label, four bytes are added to the packet. So, if n is the number of labels, n*4 bytes are added to the size of the packet when the packet is labeled.

MPLS MTU Command.

The interface MTU command in Cisco IOS specifies how big a Layer 3 packet can be without having to fragment it when sending it on a data link.

Cisco IOS has the mpls mtu command that lets you specify how big a labeled packet can be on a data link. If, for example, you know that all packets sent on the link have a maximum of two labels and the interface MTU is 1500 bytes, you can set the MPLS MTU to 1508 (1500 + 2*4). Thus, all labeled packets of up to 1508 bytes (labels included) can be sent on the link without fragmenting them. The default MPLS MTU value of a link equals the interface MTU value.
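A minimal sketch for that two-label case on the P1-P2 link (assuming a 1500-byte interface MTU):

P1(config)#interface FastEthernet0/0
P1(config-if)#mpls mtu 1508

You can verify the value afterward with show mpls interfaces detail, which lists the MPLS MTU per interface.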

Giant and Baby Giant Frames.

When a packet becomes labeled, its size increases slightly. If the IP packet was already at the maximum size possible for a certain data link (full MTU), it becomes too big to be sent on that data link because of the added labels. Therefore, the frame at Layer 2 becomes a giant frame. Because the frame is only slightly bigger than the maximum allowed, it is called a baby giant frame.

MPLS Maximum Receive Unit.

Maximum receive unit (MRU) is a parameter that Cisco IOS uses. It tells the LSR how big a received labeled packet of a certain FEC can be and still be forwarded out of the LSR without fragmenting it. This value is actually kept per FEC (or prefix) and not just per interface, because labels can be added to or removed from a packet on an LSR.

The label operation plays a role in determining the MRU. Because the label operation is determined per FEC or prefix, the MRU can change per FEC or prefix.
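The MRU shows up per prefix in the detailed LFIB output. A sketch (the MRU and label values are illustrative and depend on the MPLS MTU and the label operation for the prefix):

P2#show mpls forwarding-table detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
17         18         10.4.4.4/32      0             Fa0/1      10.0.23.3
        MAC/Encaps=14/18, MRU=1500, Label Stack{18}
(output trimmed)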

Fragmentation of MPLS packets.

If an LSR receives a labeled packet that is too big to be sent out on a data link, the packet should be fragmented. This is similar to fragmenting an IP packet. If a labeled packet is received and the LSR notices that the outgoing MTU is not big enough for this packet, the LSR strips off the label stack, fragments the IP packet, puts the label stack (after the pop, swap, or push operation) onto all fragments, and forwards the fragments. Only if the IP header has the Don't Fragment (DF) bit set does the LSR not fragment the IP packet; instead, it drops the packet and returns the ICMP error message "Fragmentation needed and do not fragment bit set" (ICMP type 3, code 4) to the originator of the IP packet.

Path MTU Discovery

Path MTU discovery is a method to avoid fragmentation, which most modern IP hosts perform automatically. In that case, the IP packets sent out have the Don't Fragment (DF) bit set. When a packet encounters a router that cannot forward it without fragmenting it, the router notices that the DF bit is set, drops the packet, and sends an ICMP error message (ICMP type 3, code 4) to the originator of the IP packet. The originator then lowers the size of the packet and retransmits it. If a problem still exists, the host can lower the size of the packet again. This continues until no ICMP message is received for the IP packet. The size of the last IP packet successfully sent is then used as the maximum packet size for all subsequent IP traffic between this specific source and destination; hence, it is the MTU of the path.
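You can mimic what a PMTUD host does with a ping from the edge that sets the DF bit (the size here is just an illustration):

P1#ping 10.4.4.4 size 1500 df-bit

If some hop on the LSP cannot carry the labeled packet at that size, the ping fails and the ICMP type 3, code 4 message comes back, exactly as described above.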




Sunday, September 16, 2012

MPLS Fundamentals Review (Chapter II).


Chapter II: MPLS Architecture

MPLS Labels

An MPLS label is a 32-bit field with a certain structure.

(MPLS label figure).

The first 20 bits are the label value. This value can be between 0 and 2^20-1, or 1,048,575. However, the first 16 values are exempted from normal use; that is, they have a special meaning.

The bits 20 to 22 are the three experimental (EXP) bits. These bits are used solely for quality of service (QoS).

Bit 23 is the Bottom of Stack (BoS) bit. It is 0 unless this is the bottom label in the stack; if so, the BoS bit is set to 1. The stack can consist of just one label, or it might have more. The number of labels (that is, 32-bit fields) that you can find in the stack is limitless.

Bits 24 to 31 are the eight bits used for Time To Live (TTL). This TTL has the same function as the TTL found in the IP header.
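Putting the four fields together, a quick plain-text sketch of the 32-bit label entry:

 0                  19 20   22 23 24      31
+---------------------+-------+--+----------+
|   Label (20 bits)   |  EXP  |S |   TTL    |
+---------------------+-------+--+----------+

where S is the BoS bit.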

Label Stacking

MPLS-capable routers might need more than one label on top of the packet to route that packet through the MPLS network. This is done by packing the labels into a stack. The first label in the stack is called the top label, and the last label is called the bottom label. In between, you can have any number of labels.

Some MPLS applications actually need more than one label in the label stack to forward the labeled packets. Two examples of such MPLS applications are MPLS VPN and AToM.

Encoding of MPLS

The label stack sits in front of the Layer 3 packet, that is, before the header of the transported protocol but after the Layer 2 header. Because of its placement, the MPLS label stack is often called the shim header.



Assuming that the transported protocol is IPv4, and the encapsulation of a link is PPP, the label stack is present after the PPP header but before the IPv4 header.

Because the label stack in the Layer 2 frame is placed before the Layer 3 header or other transported protocol, you must have new values for the data link layer protocol field, indicating that what follows the Layer 2 header is an MPLS labeled packet.

The data link layer protocol field is a value indicating what payload type the Layer 2 frame is carrying.

MPLS Protocol identifier values for layer 2 encapsulation types.


  • PPP - PPP Protocol field - 0281
  • Ethernet/802.3 LLC/SNAP encapsulation - Ethertype value 8847
  • HDLC - Protocol field - 8847
  • Frame Relay - NLPID (Network Level Protocol ID) - 80


MPLS and the OSI reference Model.

MPLS is not a Layer 2 protocol, because the Layer 2 encapsulation is still present with labeled packets. MPLS is also not really a Layer 3 protocol, because the Layer 3 protocol is still present. Therefore, MPLS does not fit into the OSI layering too well. The easiest thing to do is to view MPLS as Layer 2.5 and be done with it.

Label Switch Router.

A label switch router (LSR) is a router that supports MPLS. It is capable of understanding MPLS labels and of receiving and transmitting a labeled packet on a data link.

Three kinds of LSRs exist in an MPLS network:


  • Ingress LSRs: Ingress LSRs receive a packet that is not labeled yet, insert a label (stack) in front of the packet, and send it on a data link.
  • Egress LSRs: Egress LSRs receive labeled packets, remove the label(s), and send them on a data link. Ingress and egress LSRs are edge LSRs.
  • Intermediate LSRs: Intermediate LSRs receive an incoming labeled packet, perform an operation on it, switch the packet, and send the packet on the correct data link.


An LSR can perform three operations: pop, push, or swap.

It must be able to pop one or more labels (remove one or more labels from the top of the label stack) before switching the packet out.

An LSR must also be able to push one or more labels onto the received packet. If the received packet is already labeled, the LSR pushes one or more labels onto the label stack and switches out the packet. If the packet is not labeled yet, the LSR creates a label stack and pushes it onto the packet.

An LSR must also be able to swap a label. This simply means that when a labeled packet is received, the top label of the label stack is swapped with a new label and the packet is switched on the outgoing data link.

An LSR that pushes labels onto a packet that was not labeled yet is called an imposing LSR, because it is the first LSR to impose labels onto the packet.

An LSR that removes all labels from the labeled packet before switching out the packet is a disposing LSR.

Label Switched Path

A label switched path (LSP) is a sequence of LSRs that switch a labeled packet through an MPLS network or part of an MPLS network.

The first LSR of an LSP is the ingress LSR for that LSP, whereas the last LSR of the LSP is the egress LSR. All the LSRs in between the ingress and egress LSRs are the intermediate LSRs.

The ingress LSR of an LSP is not necessarily the first router to label the packet. The packet might have already been labeled by a preceding LSR. Such a case would be a nested LSP, that is, an LSP inside another LSP. A backup traffic engineering (TE) tunnel is an example of such a nested LSP.

Forwarding Equivalence Class.

A Forwarding Equivalence Class (FEC) is a group or flow of packets that are forwarded along the same path and are treated the same with regard to the forwarding treatment. All packets belonging to the same FEC have the same label. However, not all packets that have the same label belong to the same FEC, because their EXP values might differ; the forwarding treatment could be different, and they could belong to a different FEC.

The router that decides which packets belong to which FEC is the ingress LSR.

FEC examples:


  • Packets with Layer 3 destination IP addresses matching a certain prefix.
  • Multicast packets belonging to a certain group.
  • Packets with the same forwarding treatment, based on the precedence or IP DiffServ Code Point (DSCP) field.
  • Layer 2 frames carried across an MPLS network, received on one VC or (sub)interface on the ingress LSR and transmitted on one VC or (sub)interface on the egress LSR.
  • Packets with Layer 3 destination IP addresses that belong to a set of BGP prefixes, all with the same BGP next hop.


Label Distribution

You need a mechanism to tell the routers which labels to use when forwarding a packet. Labels are local to each pair of adjacent routers; they have no global meaning across the network. For adjacent routers to agree which label to use for which prefix, they need some form of communication between them; otherwise, the routers do not know which outgoing label needs to match which incoming label. A label distribution protocol is needed.

You can distribute labels in two ways:


  • Piggyback the labels on an existing IP routing protocol.
  • Have a separate protocol distribute labels.


Piggyback the labels on an Existing IP routing Protocol.

The big advantage of having the routing protocol carry the labels is that the routing and label distribution are always in sync, which means that you cannot have a label if the prefix is missing, or vice versa. It also eliminates the need for another protocol running on the LSR to do the label distribution.

The implementation for distance vector routing protocols (such as EIGRP) is straightforward, because each router originates a prefix from its routing table. The router then just binds a label to that prefix.

Link state routing protocols (IS-IS and OSPF) do not function in this way. Each router originates link state updates that are then forwarded unchanged by all routers inside one area. The problem is that for MPLS to work, each router needs to distribute a label for each IGP prefix, even the routers that are not originators of that prefix. Link state routing protocols would need to be enhanced in an intrusive way to be able to do this. Therefore, for link state routing protocols, a separate protocol is preferred to distribute the labels.

BGP is a routing protocol that can carry prefixes and distribute labels at the same time. BGP is used primarily for label distribution in MPLS VPN networks.

Running a Separate Protocol for Label Distribution.

Running a separate protocol has the advantage of being routing protocol independent. The disadvantage of this method is that a new protocol is needed on the LSRs.

The choice of all router vendors was to have a new protocol distribute the labels for IGP prefixes. This is the Label Distribution Protocol (LDP).

Several varieties of protocols distribute labels:


  • Tag Distribution Protocol (TDP).
  • Label Distribution Protocol (LDP).
  • Resource Reservation Protocol (RSVP).


TDP, which predates LDP, was the first protocol for label distribution, developed and implemented by Cisco. LDP and TDP are similar in the way they operate, but LDP has more functionality than TDP.

Label distribution by RSVP is used for MPLS TE only.

Label Distribution With LDP

For every IGP IP prefix in its IP routing table, each LSR creates a local binding; that is, it binds a label to the IPv4 prefix. The LSR then distributes this binding to all its LDP neighbors. These received bindings become remote bindings.

The neighbors then store these remote and local bindings in a special table, the label information base (LIB).

Each LSR has only one local binding per prefix, at least when the label space is per platform.

If the label space is per interface, one local label binding can exist per prefix per interface. Therefore, you can have one label per prefix, or one label per prefix per interface. In either case, the LSR gets more than one remote binding, because it usually has more than one adjacent LSR.

Out of all the remote bindings for one prefix, the LSR needs to pick only one and use that one to determine the outgoing label for that IP prefix. The routing table determines what the next hop of the IPv4 prefix is. The LSR chooses the remote binding received from the downstream LSR, which is the next hop in the routing table for that prefix. It uses this information to set up its label forwarding information base (LFIB), where the label from the local binding serves as the incoming label and the label from the chosen remote binding serves as the outgoing label. Therefore, when an LSR receives a labeled packet, it is capable of swapping the incoming label it assigned with the outgoing label assigned by the adjacent next-hop LSR.
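In this lab you can see the local and remote bindings per prefix in the LIB. A sketch from P2, which has LDP sessions with P1 and P3 (label values are illustrative):

P2#show mpls ldp bindings
  lib entry: 10.4.4.4/32, rev 12
        local binding:  label: 17
        remote binding: lsr: 10.1.1.1:0, label: 20
        remote binding: lsr: 10.3.3.3:0, label: 18
(output trimmed)

Only the binding from the next hop toward 10.4.4.4 (here 10.3.3.3) makes it into the LFIB as the outgoing label.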

Label Forwarding Information Base

The LFIB is the table used to forward labeled packets. It is populated with the incoming and outgoing labels for the LSPs.

MPLS Payload

The MPLS label has no Network Level Protocol Identifier field. This field is present in all Layer 2 frames to indicate what the Layer 3 protocol is.

Intermediate LSRs do not need to know what the MPLS payload is, because all the information needed to switch the packet is known by looking at the top label only. For the forwarding based on the top label to be correct, the intermediate LSR must have a local and remote binding for the top label.

An egress LSR that removes all labels on top of the packet must know what the MPLS payload is, because it must forward that payload further on.

That egress LSR is the one that made the local binding, which means that that LSR assigned a local label to that FEC, and it is that label that is used as the incoming label on the packet. Therefore, the egress LSR knows what the MPLS payload is by looking at the label, because it is the egress LSR that created the label binding for that FEC, and it knows what that FEC is.

MPLS label spaces


  • If a per-interface label space is used, the packet is not forwarded solely based on the label, but based on both the incoming interface and the label.
  • The other possibility is that the label is unique not per interface, but across the whole LSR assigning the label. This is called per-platform label space.
  • If per-platform label space is used, the packet is forwarded solely based on the label, independently of the incoming interface.


In Cisco IOS, all label switching controlled-ATM (LC-ATM) interfaces have a per-interface label space, whereas all frame-based ATM and non-ATM interfaces have a per-platform label space.
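You can tell which label space an LSR uses from the LDP identifier: the trailing :0 indicates a per-platform label space (a nonzero number would point to a per-interface label space). A sketch:

P1#show mpls ldp neighbor
    Peer LDP Ident: 10.2.2.2:0; Local LDP Ident 10.1.1.1:0
(output trimmed)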

Different MPLS modes.


  • Label distribution mode
  • Label retention mode
  • LSP control mode


Label Distribution Modes.

The MPLS architecture has two modes to distribute label bindings:


  • Downstream-on-Demand (DoD) label distribution mode.
  • Unsolicited Downstream (UD) label distribution mode.


In DoD mode, each LSR requests from its next-hop (that is, downstream) LSR on an LSP a label binding for that FEC. Each LSR receives one binding per FEC, and only from its downstream LSR for that FEC.

In UD mode, each LSR distributes a binding to its adjacent LSRs, without those LSRs requesting a label. In UD mode, an LSR receives a remote label binding from each adjacent LSR.

In Cisco IOS, all interfaces except LC-ATM interfaces use the UD label distribution mode. All LC-ATM interfaces use the DoD label distribution mode.

Label Retention Modes


  • Liberal Label Retention (LLR) mode.
  • Conservative Label Retention (CLR) mode.


In LLR mode, an LSR keeps all received remote bindings in the LIB. One of these bindings is the remote binding received from the downstream or next hop for that FEC. The label from that remote binding is used in the LFIB, but none of the labels from the other remote bindings are put in the LFIB; therefore, not all are used to forward packets.

At any time, the routing topology can change, for example due to a link going down or a router being removed. The next-hop router for a particular FEC can then change. At that time, the label for the new next-hop router is already in the LIB, and the LFIB can be quickly updated with the new outgoing label.

In CLR mode, an LSR does not store all remote bindings in the LIB; it stores only the remote binding that is associated with the next-hop LSR for a particular FEC.

LLR mode gives you quicker adaptation to routing changes, whereas CLR mode gives you fewer labels to store and a better usage of the available memory on the router.

In Cisco IOS, the retention mode for LC-ATM interfaces is CLR mode; all other types of interfaces use LLR mode.

LSP control Modes


  • Independent LSP control mode
  • Ordered LSP control mode


An LSR can create a local binding for a FEC independently of the other LSRs. This is called independent LSP control mode. In this control mode, each LSR creates a local binding for a particular FEC as soon as it recognizes the FEC. Usually, this means that the prefix for the FEC is in its routing table.

In ordered LSP control mode, an LSR only creates a local binding for a FEC if it recognizes that it is the egress LSR for the FEC, or if the LSR has received a label binding from the next hop for this FEC.

Cisco IOS uses independent LSP control mode. ATM switches running IOS use ordered LSP control mode by default.