Get The Most Out Of ATM Networks With Multicasting

April 6, 1998
Careful Trade-Offs Must Be Made In ATM Multicasting

Asynchronous Transfer Mode (ATM) has evolved from limited field trials to early volume deployment since the introduction of the first commercial product in 1993. There is a growing recognition in the industry that ATM connections to individual desktop computers will provide significant benefits to users, including:

* A single network access point for all services

* Improved access to multimedia and video conferencing services

* Improved quality of service and simplified service management

To increase the pace of ATM deployment, perceived barriers such as high cost and the lack of stable specifications for ATM services need to be overcome. The ATM Forum is moving rapidly to finalize key specifications that will make ATM the best option for multimedia applications. The specifications already finalized or soon-to-be finalized are:

* LAN Emulation

* Private Network-Network Interface (PNNI)

* Multiprotocol Over ATM

* Traffic Management

* Switched Virtual Circuit

* Voice Over ATM

One common service that has become a business driver for the deployment of ATM networks today is the Internet. Recent dramatic increases in Internet access and usage have pushed up bandwidth demand and produced delays and congestion. ATM is an ideal solution for these applications because it can inherently provide the quality of service (QoS) each application requires.

One way to make ATM networks the most efficient and cost-effective solution is to use the network's switching capability more fully. One such scheme, multicasting, is the subject of this article.

The major advantage of an ATM network is its ability to transport different types of signals through a single network using a standard cell format. Different classes of service imply various levels of cell priority: constant bit rate (CBR), for the transport of delay-sensitive traffic such as voice and interactive video; variable bit rate (VBR), for delay-insensitive traffic such as data; available bit rate (ABR), for non-time-critical traffic; and unspecified bit rate (UBR), for best-effort traffic.

ATM networks must also support two modes of transmission: unicast (point-to-point) and multicast (point-to-multipoint). Multicasting can be implemented as a series of multiple unicast links if the network is substantially under-used, but the recent dramatic increases in Internet usage will most likely overwhelm networks using the unicast approach. It is more economical to use true multicasting than to add more bandwidth and rely on unicasting. Therefore, ATM switching nodes will need to provide both priority queueing and multicasting features.

Multicasting is an efficient way of reducing the demand on the ATM network and switching bandwidth, thereby reducing the cost per connection. For example, if the ATM nodes do not have multicasting, a user sending an e-mail to coworkers 1, 2, 3, 4, and 5 must transmit five unicast copies of the same e-mail to five different destinations (Fig. 1a).

With multicast functionality, there are different ways the e-mails can be sent. One way is to send a copy to node B, from which it is multicasted to nodes C and E (Fig. 1b). Node C delivers the e-mail to coworker 5. Node E sends it to node D, where it is delivered to coworker 4, and passed down to node G, which delivers it to coworkers 1, 2, and 3.

Comparing Figures 1a and 1b, the unicast method uses 13 segments of the network while the multicast method uses only five (Table 1). Using multicasting in the above example therefore yields a 62% savings in network bandwidth usage. Different amounts of network bandwidth would be saved if other paths were used; the trade-offs lie in the amount of delay and the probability of congestion.
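The arithmetic behind the savings figure is simple enough to capture in a short C sketch. The segment counts (13 for unicast, five for multicast) are those of Figures 1a and 1b; a different topology would change them.

```c
/* Bandwidth-savings arithmetic for the e-mail example above.  The segment
 * counts are taken from Figures 1a and 1b; any other topology would give
 * different numbers. */
#include <stdio.h>

int main(void)
{
    const int unicast_segments   = 13;  /* five separate copies traverse 13 links */
    const int multicast_segments = 5;   /* one copy fans out at the switch nodes */

    double savings = 100.0 * (unicast_segments - multicast_segments)
                           / unicast_segments;

    printf("Unicast segments:   %d\n", unicast_segments);
    printf("Multicast segments: %d\n", multicast_segments);
    printf("Bandwidth savings:  %.0f%%\n", savings);  /* prints 62% */
    return 0;
}
```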

Delay in a network is defined as the time it takes for the message to go from the sender to the receiver. The minimum delay is the sum of the time it takes for the message to travel across the links and the queueing time in the nodes. If any of the links in the path become congested, additional delay will be added until the link becomes available, assuming the message is not discarded due to congestion.

In normal operation, there should be no congestion, and the minimum delay is the absolute time it takes to traverse a path. In the e-mail example above, multicasting sends the e-mail to coworker 1 through a four-node path versus a two-node path using unicast. As a result, multicasting incurs more node and link delays. If L is the average delay through one node and its link, the minimum delay using multicast would be 4L, versus 2L for unicast, assuming congestion-free transmission.

The total delay depends on how many times the e-mail stalls in a node and on how long each congestion episode lasts. Today, there is not enough experience with ATM traffic patterns, especially in the Internet service area, to give a definitive figure. However, a first-order estimate is given here to show the potential impact of multicasting.

If PCM is the average probability of congestion for any node in multicast mode, and PCU is the average probability of congestion for any node in unicast mode, the probability of congestion-free transmission through the four-node path (P4F) using multicast is:

P4F = (1 - PCM)^4    (1)

The probability of congestion-free transmission through the two-node path (P2F) using unicast is:

P2F = (1 - PCU)^2    (2)

Because multicasting reduces bandwidth usage, the probability of congestion at any node in a network with multicast capability is assumed to be proportionally smaller than with unicast only; in this example, it is 62% lower. Therefore:

PCM = 0.38 x PCU (3)

Combining equations 1 and 3 yields:

P4F = (1 - 0.38 x PCU)^4    (4)

A comparison of equations 2 and 4 is shown in Table 2.
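Since Table 2 is not reproduced here, the following C sketch recreates the comparison by evaluating equations 2 and 4 for a few sample values of PCU; the sample values themselves are illustrative.

```c
/* Recreates the comparison behind Table 2: probability of a congestion-free
 * transmission over the two-node unicast path (Eq. 2) versus the four-node
 * multicast path (Eq. 4), using the 62% reduction assumed in Eq. 3. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative per-node congestion probabilities for the unicast case. */
    const double pcu[] = { 0.01, 0.05, 0.10, 0.20 };
    const size_t n = sizeof pcu / sizeof pcu[0];

    printf(" PCU    P2F (unicast)   P4F (multicast)\n");
    for (size_t i = 0; i < n; i++) {
        double p2f = pow(1.0 - pcu[i], 2.0);         /* Eq. 2 */
        double p4f = pow(1.0 - 0.38 * pcu[i], 4.0);  /* Eq. 4 */
        printf(" %.2f     %.4f          %.4f\n", pcu[i], p2f, p4f);
    }
    return 0;
}
```

For every sampled PCU, P4F comes out higher than P2F, which is the point made in the text.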

For any given probability of congestion PCU, the chance of having a congestion-free transmission is always higher using multicast than unicast, even though with multicast, the path is twice as long.

Therefore, using multicasting increases the probability of congestion-free transmission. Alternatively, the network can support more users for a given desired probability of congestion. The trade-off is longer absolute minimum delay.

Within an ATM switching node, the cell processing is usually partitioned between the switching fabric and the interfacing line cards. Cells received on an incoming link are either stored in input buffers waiting to be routed to outbound links, or routed directly to output buffers. There are several types of ATM switch architectures that deliver various levels of price/performance for multicast services. One of the concerns in switch-node design is how effectively multicast and broadcast functions are supported.

Two options are to queue the cell in the input or output buffers within the switch fabric, or to forward the cell to the queues in the interfacing line card. Although the first method results in very simple line cards, its disadvantages are an added memory requirement in the switch buffers or an increased blocking probability.

The advantages of the second method are a lower memory requirement and a lower blocking probability in the switch fabric. On the other hand, the disadvantage is the higher complexity of the interface-card design, such as the need for multiple virtual-circuit (VC) address translations before the cell is sent to the multicast destinations. Implementing VC address translation requires a large lookup memory, realized with either a hashing function or a content-addressable memory (CAM).
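The following C fragment sketches what such per-VC translation involves, using a toy linear table in place of the hashing function or CAM mentioned above; the structure fields, table contents, and names are illustrative rather than taken from any particular device.

```c
/* Simplified sketch of per-VC address translation on a line card: the
 * incoming VPI/VCI is looked up to find the set of multicast leaf ports and
 * the outgoing VCI to use on them.  A real interface card would use a
 * hashing function or a CAM instead of this tiny linear table. */
#include <stdio.h>
#include <stdint.h>

struct vc_entry {
    uint8_t  vpi;        /* incoming virtual path identifier          */
    uint16_t vci;        /* incoming virtual channel identifier       */
    uint8_t  port_mask;  /* one bit per outgoing port (multicast leaves) */
    uint16_t out_vci;    /* translated VCI used on the outgoing links */
};

static const struct vc_entry vc_table[] = {
    { 0, 32, 0x07, 100 },   /* VC 0/32 fans out to ports 0, 1, 2 as VCI 100 */
    { 0, 33, 0x20, 200 },   /* VC 0/33 is a unicast connection to port 5    */
};

/* Return the table entry for a VPI/VCI pair, or NULL if unknown. */
static const struct vc_entry *vc_lookup(uint8_t vpi, uint16_t vci)
{
    for (size_t i = 0; i < sizeof vc_table / sizeof vc_table[0]; i++)
        if (vc_table[i].vpi == vpi && vc_table[i].vci == vci)
            return &vc_table[i];
    return NULL;
}

int main(void)
{
    const struct vc_entry *e = vc_lookup(0, 32);
    if (e)
        printf("VC 0/32 -> ports 0x%02x, outgoing VCI %u\n",
               e->port_mask, e->out_vci);
    return 0;
}
```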

Unfortunately, the current ATM physical-layer standard (UTOPIA) favors a simple interface-card design that supports neither multicasting nor priority queueing.

There are many ways to provide support for multicasting in both single- and multi-PHY UTOPIA. The key is to transmit the data along with the multicast information, i.e., which ports will receive the cell. Two methods are presented here. The direct method uses an inband multiport indicator (Fig. 2). Each bit location is associated with a port; for example, a 1 in any bit of the 6-bit multiport indicator means that the corresponding port is selected. The inband multiport indicator can be carried anywhere in the ATM cell header. Figure 3 shows an example of using the HEC byte to carry the multiport indicator.
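A minimal C sketch of the direct method follows, assuming the 6-bit multiport indicator has already been extracted from the cell header; the indicator value used is purely illustrative.

```c
/* Direct multicast addressing: a 6-bit inband multiport indicator in which
 * each bit selects one of six ports (Fig. 2).  The indicator value below is
 * illustrative only. */
#include <stdio.h>
#include <stdint.h>

#define NUM_PORTS 6

int main(void)
{
    /* Bits 0, 2, and 5 set: the cell is copied to ports 0, 2, and 5. */
    uint8_t multiport_indicator = 0x25;  /* binary 10 0101 */

    for (int port = 0; port < NUM_PORTS; port++)
        if (multiport_indicator & (1u << port))
            printf("forward cell copy to port %d\n", port);
    return 0;
}
```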

Another method of transmitting multicast information uses indirect addressing. The inband multiport code is used as an index into an internal RAM table that stores the actual multiport indicator (Fig. 4). This method allows multiple multicast sessions to be supported simultaneously.
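The indirect method can be sketched the same way: the inband multiport code indexes a small RAM table that holds the actual multiport indicator. The table size and contents below are assumptions for illustration; in practice, the table would be written per session by control software.

```c
/* Indirect multicast addressing: the inband multiport code indexes an
 * internal RAM table holding the actual 6-bit multiport indicator (Fig. 4).
 * Table size and contents are illustrative only. */
#include <stdio.h>
#include <stdint.h>

#define NUM_PORTS  6
#define TABLE_SIZE 16   /* assumed size of the internal RAM table */

/* Illustrative entries: index 0 is a broadcast group, index 1 a 3-leaf group. */
static uint8_t multiport_table[TABLE_SIZE] = {
    [0] = 0x3F,   /* all six ports */
    [1] = 0x0B,   /* ports 0, 1, 3 */
};

int main(void)
{
    uint8_t multiport_code = 1;   /* carried inband with the cell */
    uint8_t indicator = multiport_table[multiport_code];

    for (int port = 0; port < NUM_PORTS; port++)
        if (indicator & (1u << port))
            printf("forward cell copy to port %d\n", port);
    return 0;
}
```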

To ensure that cells containing voice or video information are not delayed by less urgent messages, priority queueing is necessary. Usually four queues per port are enough to support CBR, VBR, ABR, and UBR traffic.

One way of providing priority in UTOPIA is to encode the priority inband. Priority can be carried in any two bits within user-selectable locations in the ATM header. For these two priority bits, 00 might represent the highest priority, while 11 represents the lowest. Cells in the highest-priority queue are sent first until that queue is empty; only then is a cell from the second-priority queue sent, and so on through all four queues.
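Such a strict-priority scheduler reduces to a few lines of C, as sketched below; the counter-based queue model and the queue depths are illustrative only.

```c
/* Strict-priority service of four per-port queues (CBR, VBR, ABR, UBR).
 * The two inband priority bits select the queue: 00 is highest, 11 lowest.
 * Queue depths and the counter-based queue model are illustrative. */
#include <stdio.h>

#define NUM_QUEUES 4

/* Illustrative cell counts waiting in each queue; index 0 = priority 00. */
static int queue_depth[NUM_QUEUES] = { 0, 3, 1, 7 };

/* Return the index of the highest-priority non-empty queue, or -1 if all
 * queues are empty. */
static int select_queue(void)
{
    for (int q = 0; q < NUM_QUEUES; q++)
        if (queue_depth[q] > 0)
            return q;
    return -1;
}

int main(void)
{
    int q;
    while ((q = select_queue()) >= 0) {
        queue_depth[q]--;   /* send one cell from the selected queue */
        printf("sent cell from priority queue %d\n", q);
    }
    return 0;
}
```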

A major challenge in communications system design is accommodating continuous changes in customer needs. One such need is the 25 Mbit/s ATM interface to the desktop. To add a 25 Mbit/s interface capability to an ATM switch node, the engineer must decide how many lines terminate in an interface card, which functions are necessary, and what level of loading is required for the switch fabric.

One of the most popular transport signals today is the OC-3 SONET/SDH signal. The OC-3 signal uses a 155.52 Mbit/s line rate that can carry a maximum ATM cell payload rate of 149.76 Mbits/s. The maximum ATM cell rate that can be carried in a 25 Mbit/s ATM signal is 25.126 Mbits/s. Six channels of 25 Mbit/s ATM signals would have a maximum cell rate of 150.75 Mbits/s, just 0.66% over the OC-3 maximum cell rate. Therefore, a six channel 25 Mbit/s ATM concentrator with some buffering is a logical and economical way to supply ATM to a community of users (Fig. 5). This design uses simple line physical layer devices that provide the basic transmission-convergence (TC) sublayer and physical-media-dependent (PMD) sublayer functions.
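For completeness, the rate arithmetic above can be checked with a short C program using the figures quoted in the text; the small difference from the 0.66% figure comes from rounding the aggregate to 150.75 Mbits/s before computing the percentage.

```c
/* Rate check for the six-channel concentrator, using the figures quoted
 * above.  Prints an aggregate of 150.756 Mbits/s and an excess of about
 * 0.67%; the 0.66% figure in the text uses the rounded 150.75 value. */
#include <stdio.h>

int main(void)
{
    const double oc3_payload_mbps = 149.76;  /* max ATM cell payload rate of OC-3   */
    const double atm25_cell_mbps  = 25.126;  /* max cell rate of one 25-Mbit/s link */
    const int    channels         = 6;

    double aggregate = channels * atm25_cell_mbps;
    double excess    = 100.0 * (aggregate - oc3_payload_mbps) / oc3_payload_mbps;

    printf("Aggregate 25-Mbit/s cell rate: %.3f Mbits/s\n", aggregate);
    printf("Excess over OC-3 payload rate: %.2f%%\n", excess);
    return 0;
}
```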

Figure 6 shows another example of an ATM concentrator, in which the multicasting and priority capabilities are added to the physical-layer device. TranSwitch has implemented a 208-pin PQFP VLSI device, the SALI-25C, which provides the transmission-convergence sublayer function with up to 4000 cells of buffering, multicasting, and multipriority queueing for use in this type of concentrator.

Due to its built-in QoS capabilities, ATM is an ideal technology for multimedia applications. The cost effectiveness of ATM can be improved further by using newer technology, integrating more functions in a single VLSI device, and adding features such as multicasting and priority queueing in the ATM network. The implementation of the multicasting and priority functions in an ATM switch node requires careful consideration of the switch node's internal architecture, the minimum transmission delay requirement, and the desired probability of congestion in the transmission link.

Even though the current ATM physical-layer standard, UTOPIA, lacks support for multicasting and priority, there are ways to supplement the UTOPIA standard to provide these two functions in the physical layer. This allows the system designer the freedom to place these functions in the physical layer, the ATM layer, or in the switch fabric.
