Features of PBB-TE Architecture and GMPLS Control Technology

Release Date: 2010-09-13    Author: Wei Jianwen, Xie Rui, Jin Yaohui

 

This work was funded by the National Basic Research Program of China (“973” Program) under Grant No. 2010CB328205, the National Natural Science Foundation of China under Grant No. 60825103, and the National Key Technology R&D Program under Grant No. 2008BAH37B03.

 

 

    Provider Backbone Bridge Traffic Engineering (PBB-TE) is a connection-oriented packet transport technology with good scalability and end-to-end QoS support. It includes an Operation, Administration and Maintenance (OAM) mechanism on the data plane, which enhances the reliability and manageability of telecom networks. PBB-TE is expected to be the preferred solution for metro Packet Transport Networks (PTNs).


1 Features of PBB-TE Architecture

 

1.1 Evolution of IEEE Ethernet

 

1.1.1 802.1Q Virtual Local Area Network (VLAN)
In IEEE 802.1Q[1], a Customer VLAN Tag (C-Tag) is added to the basic 802.1 Ethernet frame structure. The C-Tag contains a 12-bit Customer VLAN ID (C-VID) and a 3-bit priority field. The C-VID indicates which VLAN the source host belongs to, and the priority field indicates the frame's service class. In an 802.1Q system, a physical network can support up to 4,096 VLANs, and traffic in different VLANs is kept separate. Based on the priority field, an 802.1Q bridge can offer differentiated services.
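    For illustration only (a sketch added here, not part of the standard text), the following Python snippet packs an 802.1Q C-Tag; the TPID value 0x8100 and the PCP/DEI/VID bit layout are the standard ones.

        import struct

        def build_ctag(pcp: int, vid: int) -> bytes:
            """802.1Q tag: 16-bit TPID 0x8100, then PCP(3) | DEI(1) | VID(12)."""
            assert 0 <= pcp < 8 and 0 <= vid < 4096
            tci = (pcp << 13) | vid            # DEI bit left at 0
            return struct.pack("!HH", 0x8100, tci)

        # A C-Tag for VLAN 100 with priority 5 occupies 4 bytes after the source MAC.
        print(build_ctag(pcp=5, vid=100).hex())   # -> 8100a064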

 

1.1.2 802.1ad Provider Bridge (PB)
IEEE 802.1ad PB[2] is the first provider-oriented Ethernet bridge technology. In PB, a Service VLAN Tag (S-Tag) is added to the 802.1Q frame structure for the service provider. This tag contains a 12-bit Service VLAN ID (S-VID) and a 3-bit priority field. An IEEE 802.1ad bridge network is called a Provider Bridge Network (PBN). As shown in Figure 1, the S-Tag is added at the entry node of a PBN and removed at the exit node. The S-Tag separates the provider VLAN space from the customer VLAN space, and allows multiple customer VLAN services to be transported over the same provider VLAN.

 


    Limited by the length of the S-VID, a PBN supports at most 4,096 service VLANs. A PB forwards frames based on <C-DA, S-VID> and must learn customer Media Access Control (MAC) addresses, so every PBN node maintains a large forwarding table. With these limitations, PBN cannot meet the scalability requirements of telecom networks. The sketch below illustrates the S-Tag stacking and the PB forwarding key.
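    A minimal, illustrative sketch (added for this discussion, not from the standard): pushing an S-Tag onto a customer frame and extracting the <C-DA, S-VID> pair a PB bridge forwards on. The S-Tag EtherType 0x88A8 is the standard value; the frame handling is simplified.

        import struct

        S_TPID = 0x88A8     # standard EtherType for the 802.1ad S-Tag

        def add_stag(customer_frame: bytes, s_pcp: int, s_vid: int) -> bytes:
            """Push an S-Tag right after DA + SA (the first 12 bytes) of a customer frame."""
            stag = struct.pack("!HH", S_TPID, (s_pcp << 13) | s_vid)
            return customer_frame[:12] + stag + customer_frame[12:]

        def pb_forwarding_key(s_tagged_frame: bytes) -> tuple:
            """A PB bridge forwards on <C-DA, S-VID>; the customer DA stays visible,
            so every PBN node still has to learn customer MAC addresses."""
            c_da = s_tagged_frame[:6]
            s_vid = struct.unpack("!H", s_tagged_frame[14:16])[0] & 0x0FFF
            return (c_da.hex(), s_vid)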

 

1.1.3 802.1ah Provider Backbone Bridge (PBB)
IEEE 802.1ah PBB[3] builds a Provider Backbone Bridge Network (PBBN) on top of PBNs to improve PB scalability. A PBB frame carries a provider (backbone) frame header <B-DA, B-SA, B-TAG, I-TAG>, which is absent from the PB frame; the PBBN edge nodes add and remove this header. In the header, B-DA and B-SA are the MAC addresses of the PBBN entry and exit nodes; the B-TAG contains a 12-bit B-VID that identifies a PBBN spanning tree or a transport path; and the I-TAG carries a 24-bit Service Instance ID (I-SID) that identifies the service instance, so a PBBN can support up to 16 million services. A PBBN core node forwards frames based on <B-DA, B-VID>, and only the edge nodes need to learn customer MAC addresses; the forwarding table of a core node is therefore greatly reduced.
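    The MAC-in-MAC encapsulation can be sketched as below (illustrative Python added here; the real 802.1ah I-TAG also carries flag and priority bits, which are simply left at zero in this simplification). The I-TAG EtherType 0x88E7 and B-TAG EtherType 0x88A8 are standard values.

        import struct

        B_TPID, I_TPID = 0x88A8, 0x88E7   # B-Tag reuses the S-Tag EtherType; I-Tag uses 0x88E7

        def pbb_encapsulate(customer_frame: bytes, b_da: bytes, b_sa: bytes,
                            b_vid: int, i_sid: int) -> bytes:
            """Wrap a customer frame in the backbone header <B-DA, B-SA, B-TAG, I-TAG>."""
            b_tag = struct.pack("!HH", B_TPID, b_vid & 0x0FFF)
            # Simplified I-TAG: EtherType plus a 32-bit word whose low 24 bits are the
            # I-SID (the flag/priority bits of the real 802.1ah I-TAG are left at zero).
            i_tag = struct.pack("!HI", I_TPID, i_sid & 0xFFFFFF)
            return b_da + b_sa + b_tag + i_tag + customer_frame

        # Core bridges look only at the outer <B-DA, B-VID>; the 24-bit I-SID allows
        # about 2**24 (16 million) service instances to be distinguished.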


    In terms of the number of services and nodes supported, PBB is the first bridge technology to meet the requirements of telecom networks. However, PBB still lacks Traffic Engineering (TE) and OAM features.

 

1.2 Features of 802.1Qay PBB-TE
Combining PBB with telecom network features, IEEE 802.1Qay PBB-TE[4] is a connection-oriented packet transport technology. The PBB-TE network architecture, shown in Figure 2, has the following features:

 


    (1) Scalability
    On the data plane, PBB-TE has the same MAC-in-MAC frame structure as PBB. The core node of a PBB-TE network forwards frames in the format <B-DA, B-VID>. PBB-TE inherits the strengths of PBB in supporting a large number of services and separating provider and customer addresses.


    (2) Connection Orientation and QoS Guarantee
    PBB-TE disables the spanning tree protocol and source address learning, and discards frames with unknown destination addresses instead of flooding them. Services are carried over Ethernet Switched Paths (ESPs), which are established by the control plane or a management system. PBB-TE is therefore connection-oriented, and each ESP has definite TE attributes and a QoS guarantee (see the sketch below).
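    A hypothetical sketch (names and data structures are illustrative, not from 802.1Qay) of how an ESP can be provisioned as static forwarding entries, with no learning and no flooding. An ESP is commonly identified by the triple <ESP-MAC-DA, ESP-MAC-SA, ESP-VID>.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Esp:
            """ESP identity: <ESP-MAC-DA, ESP-MAC-SA, ESP-VID>."""
            b_da: str
            b_sa: str
            b_vid: int

        def install_esp(bridges: dict, esp: Esp, hops: list) -> None:
            """hops = [(bridge_name, out_port), ...]; each bridge along the path gets one
            static entry keyed on <B-DA, B-VID>, installed by the control/management plane."""
            for bridge_name, out_port in hops:
                bridges.setdefault(bridge_name, {})[(esp.b_da, esp.b_vid)] = out_port

        def forward(bridge_table: dict, b_da: str, b_vid: int):
            """No learning, no flooding: an unknown <B-DA, B-VID> is simply dropped."""
            return bridge_table.get((b_da, b_vid))

        bridges = {}
        install_esp(bridges, Esp("00:aa:bb:cc:dd:ee", "00:11:22:33:44:55", 10),
                    [("PE1", 1), ("P1", 3), ("PE2", 2)])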


    (3) OAM
    Using a Connectivity Fault Management (CFM)[5] OAM mechanism, PBB-TE can provide carrier-class OAM without assistance from other layers.


    (4) End-to-End Path Protection
    PBB-TE provides point-to-point and point-to-multipoint ESPs with 1:1 path protection. A protection path can be established at the same time as the working path, and because the protection path is pre-configured, it can offer the same QoS as the working path. Fault detection and protection triggering are both completed on the data plane, so protection switching can be completed within 50 ms (a simplified sketch of the trigger logic follows).
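    The following is an illustrative sketch, not the 802.1Qay state machine: the head end monitors Continuity Check Messages (CCMs) on the working ESP and switches to the pre-configured protection ESP after an assumed three missed CCM intervals (3.3 ms is one of the standard CFM intervals), keeping detection plus switchover well under 50 ms.

        import time

        CCM_INTERVAL = 0.0033        # seconds; 3.3 ms is one of the standard CFM CCM intervals
        LOSS_THRESHOLD = 3           # assumption: declare failure after 3 missed intervals

        class ProtectedService:
            def __init__(self, working_esp, protection_esp):
                self.active, self.standby = working_esp, protection_esp
                self.last_ccm = time.monotonic()

            def on_ccm_received(self):
                self.last_ccm = time.monotonic()      # CCMs arrive on the active ESP

            def poll(self):
                # Called periodically on the data plane: if CCMs stop arriving, switch to
                # the pre-configured protection ESP; no control plane involvement needed.
                if time.monotonic() - self.last_ccm > LOSS_THRESHOLD * CCM_INTERVAL:
                    self.active, self.standby = self.standby, self.active
                    self.last_ccm = time.monotonic()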


    (5) Multi-Service Bearing
    PBB-TE can carry Layer 2 and Layer 3 services, and also supports TDM services. Compared with Transport Multiprotocol Label Switching (T-MPLS), however, PBB-TE has weaker support for Multipoint-to-Multipoint (MP2MP) services, and its QoS classification and control plane technology are less mature. These weaknesses are expected to be addressed during the PBB-TE standardization process. PBB-TE reduces MAN maintenance costs in the long run, and some operators have already trialed PBB-TE systems[6].

 

1.3 PBB-TE Equipment Interconnection Test in Shanghai Jiao Tong University Backbone Network
As a cost-effective carrier-class packet transport technology, PBB-TE can be applied not only in simple MANs but also in complex scenarios such as data centers or R&D backbone networks with intensive services and large numbers of nodes. The Network Center of Shanghai Jiao Tong University has begun applying PBB-TE to the campus backbone network; the success of this test will verify the interoperability of PBB-TE and MPLS equipment.


    In the campus backbone network shown in Figure 3, the core network consists of IP/MPLS routers interconnected by 10GE or GE links, and the convergence network consists of IP routers. The campus network offers services such as on-demand and multicast intra-campus video, email, File Transfer Protocol (FTP), and P2P file sharing. To provide better service, the network uses MPLS-TE technology in a number of areas. A service-specific MPLS Virtual Private Network (VPN) has also been created to provide university administration with a reliable and confidential network platform.

 


    In the test, the customer MAC frame is encapsulated as a MAC-in-MAC frame after being aggregated by the PBB-TE convergence bridge. The frame is then encapsulated as an MPLS packet at the MPLS edge node and transported across the MPLS core network. At the far end, the frame passes through the MPLS edge router and a PBB-TE bridge to reach the destination host. The PBB-TE bridge here is a PBB-TE edge bridge with convergence capability.
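    One step of this layering can be sketched as follows (illustrative only; the function name and label value are hypothetical, while the 32-bit MPLS shim layout of label/TC/S/TTL is standard). The payload handed to the MPLS edge router is already a MAC-in-MAC frame produced by the PBB-TE edge bridge, as in the Section 1.1.3 sketch.

        import struct

        def push_mpls_label(payload: bytes, label: int, tc: int = 0, ttl: int = 64) -> bytes:
            """Push one MPLS shim header: label(20) | TC(3) | S(1) | TTL(8)."""
            entry = (label << 12) | (tc << 9) | (1 << 8) | ttl    # S=1: bottom of stack
            return struct.pack("!I", entry) + payload

        # Hypothetical label value; the input is a MAC-in-MAC frame from the PBB-TE edge bridge.
        mpls_packet = push_mpls_label(b"...mac-in-mac frame bytes...", label=1001)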


    The test will be deemed successful when a PBB-TE convergence network has been established. The network center intends to evaluate the feasibility of PBB-TE in the campus network by comparing the PBB-TE convergence network with the existing IP convergence network. A study of the PBB-TE control plane will also be conducted on equipment connected to MPLS switches.


2 GMPLS Controlled PBB-TE
Although PBB-TE control plane standards are still under study, the industry has generally agreed to use GMPLS as the PBB-TE control plane technology. GMPLS generalizes the notions of label and label switching in MPLS, and reuses part of the MPLS protocol suite. GMPLS functions include signaling, routing, path selection, and link management. With appropriate extensions, GMPLS can support data planes such as SONET/SDH, Optical Transport Network (OTN), and Wavelength Division Multiplexing (WDM).

 

2.1 GMPLS-Controlled Ethernet Label Switching (GELS)
The IETF is a leading promoter of the standardization of GMPLS-controlled PBB-TE. While standardization is far from complete, two GELS drafts have been released[7-8], covering the GELS architecture and technical specifications. GELS reuses as many of the original GMPLS components as possible and makes only the necessary extensions.


    (1) Addressing Mode
    Nodes on the GELS control plane are still identified by IP addresses, and control plane messages are exchanged over the IP layer. GELS supports both numbered and unnumbered data plane ports.


    (2) Signaling Protocol
    GELS adds a new label format, <B-DA, B-VID>, which corresponds directly to the forwarding table entry of a PBB-TE node. On the data plane, all nodes along the same path share the same forwarding entry, so they must be assigned the same label; the PBB-TE label therefore has global significance across the network (see the sketch below).
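    One plausible packing of such a label is sketched below; this encoding is illustrative only, since the exact on-the-wire format is defined by the GELS drafts. The point is simply that the label carries the same <B-DA, B-VID> pair that keys the PBB-TE forwarding entry, so every node along the ESP installs the same label.

        def pbbte_label(b_da: bytes, b_vid: int) -> int:
            """Pack <B-VID, B-DA> into one label value (60 significant bits); illustrative only."""
            assert len(b_da) == 6 and 0 <= b_vid < 4096
            return (b_vid << 48) | int.from_bytes(b_da, "big")

        # The same label value is used network-wide for this ESP's forwarding entries.
        label = pbbte_label(bytes.fromhex("001122334455"), b_vid=200)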


    (3) Traffic Parameters
    GELS uses four bandwidth parameters[8]: Committed Information Rate (CIR), Committed Burst Size (CBS), Excess Information Rate (EIR), and Excess Burst Size (EBS).
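    GELS only signals these four parameters; how a node enforces them is a local matter. The following minimal two-rate, two-bucket meter (written for this article in the spirit of a two-rate three-color marker, not taken from the drafts) shows what the parameters mean in practice.

        import time

        class TwoRateMeter:
            def __init__(self, cir, cbs, eir, ebs):     # rates in bytes/s, burst sizes in bytes
                self.cir, self.cbs, self.eir, self.ebs = cir, cbs, eir, ebs
                self.c_tokens, self.e_tokens = cbs, ebs
                self.last = time.monotonic()

            def color(self, frame_len: int) -> str:
                now = time.monotonic()
                elapsed, self.last = now - self.last, now
                self.c_tokens = min(self.cbs, self.c_tokens + self.cir * elapsed)
                self.e_tokens = min(self.ebs, self.e_tokens + self.eir * elapsed)
                if frame_len <= self.c_tokens:
                    self.c_tokens -= frame_len
                    return "green"            # within CIR/CBS: guaranteed
                if frame_len <= self.e_tokens:
                    self.e_tokens -= frame_len
                    return "yellow"           # within EIR/EBS: best effort
                return "red"                  # out of profile: drop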


    (4) Route and Path Computation
    GMPLS does not restrict path selection methods, so any path computation and selection algorithm can be used. Open Shortest Path First with Traffic Engineering extensions (OSPF-TE) and Intermediate System to Intermediate System with Traffic Engineering extensions (IS-IS-TE) can still be used to advertise routing information for the PBB-TE data plane. Because data plane ports are identified as numbered or unnumbered interfaces, routing messages need not carry the ports' MAC addresses (a constrained path computation sketch follows).
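    A minimal constrained-shortest-path sketch, added for illustration: prune links whose unreserved bandwidth (learned via OSPF-TE/IS-IS-TE) is below the ESP's committed rate, then run Dijkstra on what remains. The topology format is hypothetical.

        import heapq

        def cspf(links, src, dst, required_bw):
            """links: {node: [(neighbor, cost, unreserved_bw), ...]}; returns a node list or None."""
            dist, queue = {src: (0, [src])}, [(0, src, [src])]
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == dst:
                    return path
                for nbr, c, bw in links.get(node, []):
                    if bw < required_bw:
                        continue                      # constraint pruning
                    if nbr not in dist or cost + c < dist[nbr][0]:
                        dist[nbr] = (cost + c, path + [nbr])
                        heapq.heappush(queue, (cost + c, nbr, path + [nbr]))
            return None

        # Example topology: cspf({"A": [("B", 1, 100)], "B": [("C", 1, 50)]}, "A", "C", 40)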


    (5) Link Management
    The GMPLS Link Management Protocol (LMP) and PBB-TE CFM have some overlapping functions: both can implement neighbor discovery, fault diagnosis, fault confirmation, and fault localization. CFM can work independently without support from other layers, while LMP can allocate numbered/unnumbered interface IDs automatically, so CFM and LMP can run together. The two IETF drafts only specify how GMPLS is used to build point-to-point PBB-TE paths; the specifications for point-to-multipoint path establishment and control-plane-based protection and recovery still need improvement. 802.1aq Provider Link State Bridging (PLSB) is also a candidate solution for PBB-TE control[9].

 

2.2 Simulation Platform of GMPLS Controlled PBB-TE
Researchers are likely to use a virtual PBB-TE testing platform rather than a platform built with real PBB-TE equipment because a virtual platform is more flexible and supports more nodes. Such virtual platforms are of two types:


    (1) Emulation 
    An emulation platform is typified by Finite State Machine (FSM) based emulation software such as Network Simulator 2 (NS2). It supports a large number of nodes and scales well, but it abstracts away the details of signaling interaction, and the fidelity of the control and data planes is limited.


    (2) Simulation 
    A simulation platform is typified by the DRAGON program[10], which uses computers in place of PBB-TE bridges. The complete GMPLS protocol stack runs on the computers, and data frames are sent through real network cards, so the PBB-TE control and data planes are reproduced faithfully. Scalability, however, is limited because each simulated PBB-TE bridge requires a dedicated computer.


    This paper focuses on GMPLS signaling interworking and cross-layer optimization, so a compromise is made between emulation and simulation. As shown in Figure 4, the large-scale optical network validation platform simulates a two-layer network, with PBB-TE on the upper layer and SONET/SDH on the lower layer. Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE) and OSPF-TE are fully implemented on the platform, as are the GMPLS extensions for PBB-TE and SONET/SDH. The platform does not implement data plane forwarding: a node on the platform is simply an object in the computer's memory, and signaling interaction and routing message updates between nodes are implemented as communication between objects, without real network cards. Both signaling and routing messages are recorded in logs for offline analysis. The platform is intended to demonstrate cross-layer path establishment involving tens of nodes, and GMPLS extensions supporting PBB-TE protection switching will also be implemented on it. A toy sketch of the object-based signaling model follows.
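    The sketch below (illustrative Python, with hypothetical class and message names) captures the idea: each node is only an object in memory, a Path message is a method call that walks the explicit route downstream, and the Resv is the value returned as the call stack unwinds, with every exchange written to a per-node log.

        class Node:
            def __init__(self, name):
                self.name, self.log = name, []

            def path(self, ero, label_request):
                self.log.append(f"PATH {label_request}")
                if ero:                                       # more hops downstream
                    label = ero[0].path(ero[1:], label_request)
                else:                                         # egress answers with a label
                    label = label_request
                self.log.append(f"RESV {label}")              # Resv travels back upstream
                return label

        a, b, c = Node("A"), Node("B"), Node("C")
        a.path([b, c], label_request="<B-DA, B-VID>")
        for node in (a, b, c):
            print(node.name, node.log)                        # offline reading of the logs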

 


3 Conclusion
PBB-TE is a PTN technology with layered architecture, improved OAM, and QoS guarantee. As a convergence-layer solution, PBB-TE is more cost effective than MPLS. PBB-TE and GMPLS standards are still in progress, and more telecom network characteristics will be introduced into the PBB-TE system.  With improved standards, PBB-TE is expected to become an optimal solution for next-generation metro PTNs. 

 

References
[1] IEEE 802.1Q. IEEE standards for local and metropolitan area networks: virtual bridged local area networks [S]. 2003.
[2] IEEE 802.1ad. IEEE standards for local and metropolitan area networks: virtual bridged local area networks, amendment 4: provider bridge [S]. 2005.
[3] IEEE 802.1ah. IEEE standards for local and metropolitan area networks: virtual bridged local area networks, amendment 6: provider backbone bridge [S]. 2005.
[4] IEEE 802.1Qay. IEEE standards for local and metropolitan area networks: virtual bridged local area networks, amendment 7: provider backbone bridge traffic engineering [S]. 2009.
[5] IEEE 802.1Qaw. IEEE standard for local and metropolitan area networks: virtual bridged local area networks, amendment 9: management of data driven and data dependent connectivity faults [S]. 2009.
[6] Deutsche telecom flirts with PBT [EB/OL]. [2007-09-20]. http://www.lightreading.com/document.asp?doc_id=134344.
[7] Generalized Multi-Protocol Label Switching (GMPLS) Ethernet label switching architecture and framework [R]. draft-ietf-ccamp-gmpls-ethernet-arch-09. 2010.
[8] Ethernet traffic parameters [R]. draft-ietf-ccamp-ethernet-traffic-parameters-10. 2010.
[9] ALLAN D, ASHWOOD-SMITH P, BRAGG N, et al. Provider link state bridging [J]. IEEE Communications Magazine, 2008, 46(9): 110-117.
[10] SOBIESKI J. DRAGON: Dynamic resource allocation via GMPLS optical networks [C]//MCNC Optical Control Planes Workshop, Apr 23, 2004, Chicago, IL, USA.

 

 

[Abstract] Time Division Multiplexing (TDM) transport networks are evolving toward packet-oriented transport, and a variety of carrier-class packet transport technologies have emerged. Provider Backbone Bridge with Traffic Engineering (PBB-TE) is a connection-oriented packet transport technology that provides good scalability and manageability, and guarantees Quality of Service (QoS). Generalized Multi-Protocol Label Switching (GMPLS) is a mature transport network control plane technology that supports multiple data planes with different switching granularities. GMPLS-controlled PBB-TE is a promising solution for Packet Transport Networks (PTNs).