[PE2-bgp] peer 3.3.3.9 connect-interface loopback 0
[PE2-bgp] address-family vpnv4
[PE2-bgp-vpnv4] peer 3.3.3.9 enable
[PE2-bgp-vpnv4] quit
[PE2-bgp] quit
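After the VPNv4 address family peering is configured on both PEs, you can check that the session has come up. As a minimal sketch (the exact output fields depend on the software version), the following display command on PE 2 should list peer 3.3.3.9 with a state of Established:
[PE2] display bgp peer vpnv4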
5. The default MTU value varies by interface type. To avoid packet fragmentation, set the MTU value for each POS interface on each device to 1500 bytes. The following shows the MTU configuration on PE 1.
[PE1] interface pos 1/1/0
[PE1-Pos1/1/0] mtu 1500
[PE1-Pos1/1/0] shutdown
[PE1-Pos1/1/0] undo shutdown
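On a POS interface, the new MTU typically takes effect only after the interface goes down and comes up again, which is why the shutdown and undo shutdown commands follow the mtu command. Configure the other devices in the same way. As a sketch, assuming PE 2 also uses interface POS 1/1/0 toward the backbone (the interface number is an assumption for illustration), the equivalent configuration would be:
[PE2] interface pos 1/1/0
[PE2-Pos1/1/0] mtu 1500
[PE2-Pos1/1/0] shutdown
[PE2-Pos1/1/0] undo shutdown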
Verifying the configuration
# Ping CE 2 from CE 1 to verify their connectivity.
<CE1> ping 100.2.1.2
Ping 100.2.1.2 (100.2.1.2): 56 data bytes, press CTRL_C to break
56 bytes from 100.2.1.2: icmp_seq=0 ttl=128 time=1.073 ms
56 bytes from 100.2.1.2: icmp_seq=1 ttl=128 time=1.428 ms
56 bytes from 100.2.1.2: icmp_seq=2 ttl=128 time=19.367 ms
56 bytes from 100.2.1.2: icmp_seq=3 ttl=128 time=1.013 ms
56 bytes from 100.2.1.2: icmp_seq=4 ttl=128 time=0.684 ms
--- Ping statistics for 100.2.1.2 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.684/4.713/19.367/7.331 ms
Access to the IP backbone through an LDP VPLS
Network requirements
Create an LDP PW between PE 1 and PE-agg on the VPLS access network, so that CE 1 can
access the IP backbone through the PW.
Configure OSPF process 2 to advertise routing information on the IP backbone.
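As a minimal sketch of the first requirement, an LDP-signaled PW is created by enabling L2VPN, defining a VSI that uses LDP as the PW signaling protocol, and specifying the remote PE as the peer. The VSI name (vpls1) and the PE-agg LSR ID (2.2.2.9) below are assumptions for illustration only; the actual names and addresses used in this example may differ.
[PE1] l2vpn enable
[PE1] vsi vpls1
[PE1-vsi-vpls1] pwsignaling ldp
[PE1-vsi-vpls1-ldp] peer 2.2.2.9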
Figure 132 Network diagram