Electronic Design

Search Engines Take On Larger Forwarding Tables

Thanks to ternary-CAM and algorithmic architectures, a pair of network search engines can handle larger forwarding tables.

Implementing high-speed packet-search subsystems has become critical in equipment like routers and layer 2/3 switches. Policy lookups, namely access-control-list (ACL) and quality-of-service (QoS) searches, demand high-performance, flexible ternary content-addressable memories (TCAMs).

Other operations, such as forwarding lookups for virtual router forwarding and virtual private network applications, are better implemented with an algorithmic search solution. Because these exact-match and longest-prefix applications often use tables with more than 1 million entries, the algorithmic approach is the more cost-effective choice.
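To see why TCAMs suit policy lookups, consider the ternary (value/mask) match a TCAM performs in hardware across every entry at once. The sketch below is illustrative only; the rules and function names are invented for this example and are not a Cypress API.

```python
def ternary_match(key, entries):
    """Return the index of the first entry whose (value, mask) pair matches.

    A mask bit of 1 means "care"; 0 means "don't care" -- the wildcarding
    that lets one ACL rule cover a whole address range.
    """
    for index, (value, mask) in enumerate(entries):
        if (key & mask) == (value & mask):
            return index  # TCAMs report the highest-priority (first) match
    return None

# Two illustrative ACL-style rules over 8-bit keys:
rules = [
    (0b10100000, 0b11110000),  # match any key beginning 1010
    (0b00000000, 0b00000000),  # full wildcard: matches everything
]

print(ternary_match(0b10101111, rules))  # first rule wins -> 0
print(ternary_match(0b01010101, rules))  # falls through to wildcard -> 1
```

A hardware TCAM evaluates all entries in a single cycle rather than looping, which is exactly what makes it expensive in area and power as tables grow past a few hundred thousand entries.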

Noting both system needs, designers at Cypress Semiconductor added TCAM-based and algorithmic-based search engines, which help simplify system design by reducing component counts. Not only do the Ayama 20000 TCAM and the Sahasra 50000 algorithmic search engines ease system complexity, they also lower system power requirements and overall system cost.

The Ayama 20000 can perform 266 million searches/s in 36-, 72-, or 144-bit memory configurations, 133 million searches/s in a 288-bit configuration, and 66.5 million searches/s in a 576-bit configuration. Included on-chip are two LA-1 ports (lookaside ports as defined by the Network Processing Forum). Each port handles 128 contexts and supports either first- or second-generation quad-data-rate SRAM interfaces.

A single chip can support up to 256k 36-bit entries (or 128k 72-bit, 64k 144-bit, 32k 288-bit, or 16k 576-bit entries). Additional no-bus-latency SRAMs can be connected to expand the entry storage. A Fastlink interface enables multiple devices to be cascaded, expanding the number of policy entries up to 15 million and the number of forwarding entries to 20 million.
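The quoted configurations are consistent with a single array reorganized at different widths: each entry-count/width pairing works out to the same total capacity (taking 1 Mbit as 2^20 bits), matching the 9-Mbit memory option mentioned below. A quick arithmetic check:

```python
# Entry counts and widths as quoted in the article.
configs = {36: 256 * 1024, 72: 128 * 1024, 144: 64 * 1024,
           288: 32 * 1024, 576: 16 * 1024}

for width, entries in configs.items():
    mbits = width * entries / 2**20
    print(f"{entries // 1024}k x {width}-bit entries = {mbits:g} Mbit")
# Every configuration fills the same 9-Mbit array.
```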

Up to four age-assist tables, each with single- or double-buffer aging and up to 256k entries, give the network processor more headroom. Fewer cycles are spent on SRAM aging writes, which improves system performance.

The chip's advanced power management allows for software-enabled reduction in power consumption. The Ayama 20000 features the company's novel Mini-Key programmable search key to help conserve power. Error management, in the form of parity support and error notification, is built in on-chip. The TCAM also includes automatic address generation for associated data and an interrupt- and polling-based result interface. Single- and dual-LA-1-port versions of the Ayama 20000 will be offered, each with memory options of 9 or 18 Mbits of on-chip storage.

Complementing the TCAM, the Sahasra 50000 algorithmic search engine can handle millions of entries. Designed to work with a packet processor and a policy table, it can perform 250 million searches/s. An on-chip storage capacity of 72 Mbits of SRAM lets it access as many entries as four 18-Mbit TCAMs, while consuming a fraction of the power. A suite of table management software, available as part of Cypress' Cynapse software platform, provides a turnkey solution in conjunction with the chip.
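The article doesn't disclose the Sahasra's internal algorithm, but algorithmic engines for longest-prefix forwarding are commonly built on trie structures held in ordinary SRAM, trading a few sequential memory reads for far lower power than a parallel TCAM search. A minimal binary-trie sketch (names and routes are invented for illustration):

```python
class TrieNode:
    """One node of a binary trie; next_hop is set only where a prefix ends."""
    __slots__ = ("children", "next_hop")

    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix_bits, next_hop):
    """Install a route given its prefix as a bit string, e.g. "1010"."""
    node = root
    for bit in prefix_bits:
        i = int(bit)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def longest_prefix_match(root, key_bits):
    """Walk the key, remembering the deepest prefix seen: the longest match."""
    node, best = root, root.next_hop
    for bit in key_bits:
        node = node.children[int(bit)]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "10", "coarse-route")
insert(root, "1010", "fine-route")
print(longest_prefix_match(root, "10101111"))  # -> fine-route
print(longest_prefix_match(root, "10111111"))  # -> coarse-route
```

Each lookup costs at most one memory read per key bit, so the achievable search rate depends on SRAM bandwidth rather than table size, which is why such designs scale past the point where TCAMs become impractical.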

The Ayama 20000 comes in a 1152-contact BGA package. It operates with a 1.2-V core supply and typically consumes about 8 W. It sells for $275 apiece in 10,000-unit lots, and samples are expected in the first quarter of 2004. Sahasra chip samples are slated for the second quarter of 2004. Housed in a 256-contact BGA package, it will cost about $400 in 10,000-unit lots.

Cypress Semiconductor Corp.

See associated figure
