As a new wave of requirements emerges in the telecommunications industry, equipment manufacturers are seeking ways to accelerate time to market and reduce development and production costs. They also need to preserve technology investments through successive generations of products. Hardware and software standards are increasingly seen as a way to achieve these objectives so that resources can be focused on product differentiation while deriving maximum business benefit and competitive advantage.
Application-enabling platforms harness the latest hardware and software standards and are an increasingly popular mechanism to achieve these business objectives. Broadband Remote Access Servers (BRAS) are one area where this new approach to development can be applied.
BRAS requirements are outlined in the DSL Forum architecture specification TR-059 to support quality of service (QoS)-enabled IP services. The BRAS provides more flexible service provisioning because subscriber services can be handled within a single managed network, rather than being provisioned on a one-to-one basis through to the NSP or ASP. See Figure 1.
On the access side, the BRAS provides an aggregation point for a variety of services. These include traditional ATM-based offerings and newer, more native IP-based services, such as support for Point-to-Point Protocol over ATM (PPPoA), PPP over Ethernet (PPPoE) and direct IP services encapsulated over an appropriate Layer 2 transport.
The NSP and ASP connections can support an assortment of high-bandwidth links. At the physical layer these could be traditional DS1/E1 through to DS3/E3; SONET or SDH at OC3c/STM1 through to OC48c/STM16; and 10/100/1000 Ethernet, for hosting and colocation, for example.
Numerous options must be supported at the data link level, Layer 2: ATM, to maintain compatibility with existing systems, along with Ethernet, Packet Over SONET (POS) and Frame Relay.
AGGREGATION AND IP TRANSPORT
The key aggregation functions are performed up to the IP transport layer. These can involve simple forwarding of IP traffic, including directing traffic on to IP and network-based Virtual Private Networks (VPNs). They often require termination of PPP sessions, traffic aggregation, multicasting capabilities, Network Address Translation (NAT)-type functionality and authentication and encryption, depending on the subscribed services.
A key aspect for all these functions is provisioning QoS that can be provided through the Internet Engineering Task Force (IETF) Differentiated Services (DiffServ) specifications and the use of the Differentiated Services Code Point (DSCP). This does not dictate an absolute measure of QoS, but rather a relative priority mechanism for traffic. On the access side, particularly on individual subscriber loops, this is almost equivalent to QoS capabilities. These capabilities will initially be supported over IPv4, but must allow for IPv6 support in the future.
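The relative (rather than absolute) nature of DiffServ priority can be sketched as a simple classification table. The DSCP values below are the standard code points from RFC 2474, RFC 2597 and RFC 3246; the priority numbers are illustrative only and not defined by any standard.

```python
# Minimal sketch: mapping DSCP values to per-hop behaviours (PHBs)
# and a relative queue priority. DiffServ defines no absolute QoS,
# only relative treatment between classes.
DSCP_TO_PHB = {
    46: ("EF",   0),   # Expedited Forwarding: highest priority
    34: ("AF41", 1),   # Assured Forwarding, class 4
    26: ("AF31", 2),
    18: ("AF21", 3),
    10: ("AF11", 4),
    0:  ("BE",   5),   # Default PHB: best effort
}

def classify(dscp: int) -> tuple:
    """Return (PHB name, relative priority) for a DSCP value.

    Unknown code points fall back to best effort, reflecting
    DiffServ's relative-priority model.
    """
    return DSCP_TO_PHB.get(dscp, ("BE", 5))
```

A scheduler on the subscriber loop would then serve queues strictly or proportionally by this priority, which is why, on individual access loops, relative priority is almost equivalent to absolute QoS.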
As these services are predominantly IP-based, the BRAS must perform basic IP-routed network functions, very similar to those of an edge router. This includes support for Open Shortest Path First (OSPF) and Border Gateway Protocol version 4 (BGP4), along with traffic engineering functions. As traffic is increasingly aggregated into high speed uplink connections to NSPs and ASPs, Multi-Protocol Label Switching (MPLS) can provide traffic engineering characteristics, and, in conjunction with BGP4, Provider Provisioned VPNs (PP-VPNs).
It should be noted that aggregation can also be done at the PPP layer through the use of a routable Layer 2 protocol such as Layer 2 Tunneling Protocol (L2TP). In this case, the device operates as a L2TP Access Concentrator (LAC).
ADVANCEDTCA AT THE CORE OF BRAS
There are a number of standards-based technologies that can be used to develop a BRAS, to accommodate the range of functionality described above. At the core of these standards-based technologies is the Advanced Telecom Computing Architecture (AdvancedTCA), developed by the PCI Industrial Computer Manufacturers Group (PICMG). The PICMG 3.X AdvancedTCA specifications provide a flexible hardware platform definition designed to meet the requirements of the telecommunications industry. Other alternatives include the CompactTCA specification, based on the switched Ethernet PICMG 2.16 CompactPCI specifications.
The base PICMG 3.0 specification takes account of the required power and mechanical specifications, such as multiple -48V power supplies with hot swappable fans. These capabilities are managed at a fundamental hardware level by the Intelligent Platform Management Interface (IPMI). The 8U board form factor provides 140 sq in of space with 200W of power.
When combined with a range of fabric inter-connection parameters, AdvancedTCA provides a powerful framework for a broad range of next-generation telecommunications equipment. Packaging this functionality into 14-slot 19in and 16-slot 23in, 12U/13U form factors, with regulatory compliance to NEBS and equivalent international specifications, completes the requirements.
From a fabric perspective, AdvancedTCA specifies a number of options. In addition to the low level blade management functions of IPMI, PICMG 3.0 specifies a base fabric dual star interconnection mechanism using Gigabit Ethernet. This assumes that two blades will operate as a traditional redundant pair, performing fabric switching and core shelf control functions.
PICMG DATA FABRIC
While 1Gb/s of bandwidth per slot is more than adequate for control plane and shelf management functions, it is clearly insufficient for a BRAS device requiring multiple OC48 uplinks, and possibly OC192 or 10GE (10 Gigabit Ethernet) in the future.
To address this requirement, PICMG has defined a data fabric to support a variety of payload capabilities. Like the base fabric, the data fabric can support a dual star configuration, but also has the option of a mesh interconnection, with every blade directly connected to every other blade.
This may be appealing from a bandwidth and throughput perspective, but it complicates control functions and increases cost as every card is now essentially a switch blade. For these reasons, the dual star (base and data) fabric model is generally preferred. The data fabric options range from PICMG 3.1, supporting Gigabit Ethernet and Fibre Channel, to InfiniBand (PICMG 3.2), StarFabric (PICMG 3.3), PCI Express/Advanced Switching (PICMG 3.4) and RapidIO (PICMG 3.5). Some of these specifications are capable of delivering 10Gb/s per slot, offering a shelf capable of 240/280 Gb/s of switching capacity.
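The quoted shelf capacities follow from simple arithmetic, sketched below. The assumptions here (two slots dedicated to the redundant switch blades, full-duplex counting) are an interpretation of the figures in the text, not taken from the PICMG specifications themselves.

```python
def shelf_capacity_gbps(slots: int, switch_slots: int = 2,
                        per_slot_gbps: int = 10) -> int:
    """Aggregate switching capacity of a dual star shelf.

    Two slots host the redundant switch blades; each remaining
    payload slot contributes per_slot_gbps in each direction
    (full duplex), hence the factor of 2.
    """
    payload_slots = slots - switch_slots
    return payload_slots * per_slot_gbps * 2

# 14-slot 19in shelf: (14 - 2) * 10 * 2 = 240 Gb/s
# 16-slot 23in shelf: (16 - 2) * 10 * 2 = 280 Gb/s
```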
The fabric specifications are still undergoing standardisation and development because, in addition to raw throughput capabilities, other requirements include internal traffic management, queuing and Head Of Line (HOL) blocking type functionality.
In a packet-based environment, delivering 10Gb/s uplink capabilities at line rates generally requires 2X over-speeding, or 20Gb/s per slot, to maximise the use of statistical multiplexing. However, the developers of the AdvancedTCA specifications had the foresight to provide unused backplane pins to provide support for additional capabilities. This allows developers who wish to use AdvancedTCA capabilities to implement off-the-shelf, or proprietary chipsets, that will support some, or all, of the capabilities outlined above.
The AdvancedTCA specifications provide for both front and rear access connectivity. Rear access is supported via a Rear Transition Module (RTM), which can be removed if not required. Layer 1 interface requirements may be supported on this type of module. Where necessary, specific interface functions can be supported through traditional PCI Mezzanine Cards (PMC), which minimise the development of carrier cards and mezzanines. PICMG is also developing the Advanced Mezzanine Card (AMC) specification, which will have 30% more component space and over 100% more power than existing capabilities. Additionally, AMCs are hot swappable modules and up to eight can be specified per AdvancedTCA blade.
BRAS requires a high degree of flexibility in its I/O capabilities, making it an ideal candidate for network processor technology. Economies of scale can be achieved by using the same card design with appropriate programming.
The major challenges for BRAS are handling functionality at Layer 2 and above, where the traffic management of IP flows becomes critical. Aside from the basic cell and packet handling functions, traffic policing and shaping with appropriate queue management and discard functions are critical. The access side requires traffic management at both the Customer Premise Equipment (CPE) and the BRAS, and, it is assumed, on the BRAS downstream connections to the NSPs and ASPs. See Figure 2.
On the access side, the DSL layers are based on PPPoA today and will use PPPoE encapsulation in the future. The ATM VCCs are AAL5 PVCs supporting Unspecified Bit Rate (UBR), UBR+ and Variable Bit Rate (VBR-rt) classes of service. In current applications, traffic management is performed purely at ATM layers. As the application layer is predominantly IP, all services receive the same treatment, unless multiple ATM VCs are created and flows mapped accordingly. Each physical connection at the BRAS can contain tens of thousands of subscriber flows.
When traffic management is moved to the IP layer and capabilities such as DiffServ are used, QoS granularity can be moved to individual flows. However, the implication is that the Layer 2 transport, predominantly ATM today, must support sufficient bandwidth and associated characteristics, such as delay and latency, to support IP traffic management effectively. When all IP-based mechanisms are introduced, such as PPPoE, these issues will change. The basic assumption must be that Ethernet provides sufficient bandwidth for all applications and that if multiple PPP sessions are provisioned, their total bandwidth requirements will enable IP QoS management.
On the network side, the NSP and ASP connections are much more highly aggregated. The connections may use traditional IP forwarding mechanisms with traffic management being mapped via DiffServ code points (DSCP). Alternatively, DSCPs can be mapped to MPLS LSPs. In either case, queue management and scheduling are necessary using mechanisms such as Weighted Random Early Discard (WRED). More complex mechanisms may also be required where traffic is to be mapped to VPNs, through the use of mechanisms such as RFC2547bis, which uses MPLS label stacking. Where aggregation occurs at the PPP layer, L2TP is required, encapsulated with IP. Mapping PPP sessions into L2TP can either be through provisioning via a Layer 2 Tunnelling Server (L2TS) or through the use of a RADIUS server.
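The WRED mechanism mentioned above can be sketched in a few lines. This is the classic RED drop decision; the "weighted" aspect comes from configuring different thresholds per DSCP class. The threshold values in the example are illustrative.

```python
import random

def wred_drop(avg_queue: float, min_th: float, max_th: float,
              max_p: float = 0.1) -> bool:
    """Weighted Random Early Discard drop decision for one class.

    Below min_th no packet is dropped; above max_th every packet
    is dropped; in between, drop probability rises linearly from
    0 to max_p with the average queue depth.
    """
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    drop_p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < drop_p
```

A scheduler would hold one (min_th, max_th, max_p) profile per DSCP-derived class, discarding low-priority traffic earlier as congestion builds.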
This sophisticated set of data plane functionality needs a complementary set of control plane functions. The first requirement is to provide an overall carrier-class platform that can meet the requirements of the five 9s type of reliability for the service provider community. The Service Availability Forum (SAF) is focused on this area, developing comprehensive specifications for middleware that will enable portability for software applications between platforms.
The key advantage of standardisation here is that it will enable a selection of higher level applications to be developed that require significantly less integration than with current development methods, which rely on proprietary Application Programming Interfaces (APIs) and often have inconsistent, high-availability models, if they exist at all.
At the lower layer, the Hardware Platform Interface (HPI) specification provides standards-based access mechanisms to fundamental hardware components. In many ways, it is complementary to the IPMI functionality specified for CompactPCI and AdvancedTCA.
Far more interesting, however, is the SAF Application Interface Specification (AIS), which provides a standard set of mechanisms to achieve the service availability levels required by carriers. This specification, to be published soon, includes: support for logical grouping of services, known as cluster management; messaging services for distributed system support; locking services; checkpoint services to enable the replication of information for high availability applications; event services to handle such things as failures; and availability management to provide high availability models and switchover options. Orthogonal to both these standards is a management specification, which is still in development, to provide consistent access to appropriate information.
Platforms based on CompactPCI or AdvancedTCA, with SAF implementations at their core, form an application-enabling platform: a standard set of hardware and software services that can now be applied to the BRAS implementation.
A key requirement for most telecommunications devices is scalability, and the BRAS is no exception. To meet these needs, the control plane model must have as much distribution as possible, without making the overhead too significant. As services are offered and exposed primarily at the IP layer, the BRAS is a routed device. It demands, as a minimum, an Interior Gateway Protocol (IGP) such as OSPF and possibly an Exterior Gateway Protocol (EGP), BGP4, if the NSP and/or ASP networks are in different domains.
Traffic engineering is also necessary at the IP layer, enabled by opaque Link State Advertisements (LSAs), the information elements exchanged between IGP routing devices. The traffic-engineered topology of a network can be established from this information. Equivalent mechanisms are supported for BGP4 and MPLS, although the former is a different type of protocol (distance vector as opposed to link-state) and the latter is a signalling protocol.
The I/O cards in the BRAS must perform as much processing as possible to achieve scalability. With the AdvancedTCA architecture, the data plane functionality provided by network processors has to be complemented by general purpose processors for control plane functions. Specifically, any Layer 2 and Layer 3 signaling, management and control plane termination functionality must be performed on the I/O cards.
DISTRIBUTED MESSAGING SYSTEM
In the case of IP functionality, the basic keep-alive and routing information exchanges must be performed on the I/O cards. The routing information (changes) gathered from these processes must be forwarded over the distributed messaging system to the system control processing function, located on the system control and switch cards. This distributed messaging system operates over the dual star GbE base fabric, previously described, and complies with the SAF specifications.
The system control and switch card is the central repository for the routing databases. From this information, the Forwarding Information Base (FIB) is created, which represents the best routes to any location based on both IGP and EGP routing information. The FIB can then be distributed, either wholly or partially, to the I/O cards, so that local routing decisions can be made, rather than constantly querying the central location. Many, or all, of the routes in the FIB will be programmed into hardware.
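The local routing decision made against a distributed FIB is a longest-prefix match. The sketch below shows the lookup semantics with Python's standard ipaddress module; the FIB entries and next-hop names are hypothetical, and a real BRAS would implement this as a trie or TCAM lookup in hardware rather than a linear scan.

```python
import ipaddress

# Hypothetical FIB entries distributed from the system control card:
# (prefix, next hop). The most specific matching prefix wins.
FIB = [
    (ipaddress.ip_network("10.0.0.0/8"),  "uplink-1"),
    (ipaddress.ip_network("10.1.0.0/16"), "uplink-2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "default"),
]

def lookup(dst: str) -> str:
    """Longest-prefix match over the local FIB copy."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net, _ in FIB if addr in net),
               key=lambda net: net.prefixlen)
    return next(hop for net, hop in FIB if net == best)
```

Distributing this table to the I/O cards is what lets forwarding decisions happen locally instead of querying the central routing databases per packet.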
The hardware/software interface boundary between the network processors and the control plane is addressed by the Network Processing Forum's CSIX specification and the work of the IETF ForCES working group. This control plane information is used by the network processors to forward traffic. See Figure 3.
TRAFFIC ENGINEERING CALCULATIONS
The traffic engineering capabilities of the BRAS are handled in a similar way. The opaque LSA routing information is siphoned off into a Traffic Engineering Database (TED), rather than being held with the main Link State Database (LSDB). This database is used for traffic engineering calculations supporting both DiffServ mappings and MPLS tunnels. In the case of MPLS, the TED information is used by a Constrained Shortest Path First (CSPF) engine to calculate the best path through a network that can then be signalled by RSVP-TE. The CSPF calculations and the MPLS signaling can operate centrally or be distributed to the I/O cards for local processing. This is a design choice based on the number of calculations expected versus an additional memory/processor cost of the I/O cards.
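A CSPF engine of the kind described can be sketched as Dijkstra's algorithm with constraint pruning: links in the TED that cannot satisfy the requested bandwidth are simply excluded before the shortest-path search. The in-memory graph structure here is an assumed stand-in for the TED, not a real implementation.

```python
import heapq

def cspf(ted, src, dst, min_bw):
    """Constrained Shortest Path First sketch.

    ted: {node: [(neighbour, cost, available_bw), ...]}, a toy
    stand-in for the Traffic Engineering Database. Links whose
    available bandwidth is below min_bw are pruned; Dijkstra then
    finds the lowest-cost path, which RSVP-TE would signal.
    Returns (cost, path) or None if no feasible path exists.
    """
    heap = [(0, src, [src])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, link_cost, avail_bw in ted.get(node, []):
            if avail_bw < min_bw or nbr in visited:  # constraint pruning
                continue
            heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))
    return None
```

With a direct but low-bandwidth link available, a higher bandwidth request forces the longer path, which is exactly the behaviour that distinguishes CSPF from plain SPF.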
In addition to the basic routing functions, configuration information must be distributed throughout the BRAS. This ranges from system-level functions to packet classification and filtering information. The establishment and distribution of policy information is a key element in networks of this type. It is often extensive and for some of the applications described above requires RADIUS capabilities.
One of the key advantages of a standards-based technology, such as AdvancedTCA, is that general compute blades can be introduced alongside data plane and control plane blades to perform additional functions. Depending on the configuration, it is perfectly possible to start with a small system containing all functional elements, and as requirements expand, move these to additional systems. In this situation, the benefits of an application-enabling platform incorporating both standards-based hardware and software platform layers become increasingly obvious.
The BRAS concept described above has been placed in the context of DSL access. However, it is easy to see that as new services are introduced, the range of features supported by a BRAS applies to any type of access technology. These could include cable, wireless, Wi-Fi and WiMAX. New services will include: multicast video and audio services incorporating video on demand; interactive gaming; and network-based security features, all residing alongside traditional voice and best-effort internet access.
To meet cost objectives and time-to-market pressures, and to preserve technology investments across successive generations of equipment, application-enabling platforms incorporating standards-based hardware and software functionality represent an economic alternative to traditional proprietary approaches.