Storage-area networks (SAN) and network-attached storage (NAS) move storage out of the box and onto the network. The Internet Small Computer System Interface (iSCSI) standard takes the SCSI standard and overlays it on the TCP/IP networking stack (Fig. 1), inserting SCSI commands and data into TCP/IP packets (Fig. 2).
The idea is to make it easy for operating systems that already handle SCSI hard disks to use remote storage by presenting the same SCSI interface regardless of the kind of storage on the other side of the device driver. The catch is overhead and performance.
SCSI simplifies the operating system's job. But layering it on a high-overhead transport like TCP/IP imposes a heavy load on the host in a traditional software implementation. Of course, TCP/IP lets users take advantage of existing Ethernet infrastructure and low-cost Ethernet switching hardware. Still, the iSCSI processing chore only grows as network speeds increase.
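To make the encapsulation concrete, here is a minimal sketch of the header that every iSCSI protocol data unit carries inside the TCP stream. The 48-byte Basic Header Segment layout comes from the iSCSI specification (RFC 3720), not from this article, and the field values below are illustrative only.

```python
import struct

# 48-byte iSCSI Basic Header Segment (BHS), big-endian, no padding.
# Field layout per RFC 3720; values used below are illustrative.
BHS = ">BB2sB3s8sI28s"
#  opcode(1) flags(1) opcode-specific(2) TotalAHSLength(1)
#  DataSegmentLength(3) LUN(8) InitiatorTaskTag(4) opcode-specific(28)

bhs_size = struct.calcsize(BHS)
print(bhs_size)  # 48

# Packing a minimal (hypothetical) SCSI Command PDU header:
pdu = struct.pack(BHS,
                  0x01,                        # opcode: SCSI Command
                  0x80, b"\0\0", 0,            # flags, reserved, no AHS
                  (4096).to_bytes(3, "big"),   # 4-kbyte data segment
                  bytes(8),                    # LUN
                  0x1234,                      # initiator task tag
                  bytes(28))                   # CDB etc. (zeroed here)
assert len(pdu) == 48
```

Every one of these PDUs rides inside ordinary TCP segments, which is exactly why a software-only implementation spends so many host cycles on checksums, reassembly, and header parsing.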
This is where a component like Silverback Systems' iSnap2200p comes into play (Fig. 3). The iSnap2200p is the big brother of the iSnap2110, which has a pair of 1-Gbit/s ports and a PCI-X interface. The iSnap2110 remains a good choice for low-end servers and workstations.
The iSnap2200p, though, targets the high end, where 10-Gbit/s Ethernet comes into play. The problem is that 10-Gbit/s Ethernet paired with a relatively dumb Ethernet controller can bring a server to its knees. This is why the iSnap2200p's underlying platform is a TCP/IP offload engine (TOE).
The iSnap2200p is effectively an Ethernet adapter, and its TOE is available for general communication chores. But its primary job is accelerating iSCSI traffic. Much of the work is done by the classification engine, which examines incoming packets. Off-chip SDRAM stores incoming packets while they're processed, and the SRAM interface handles system data structures.
The chip can be an iSCSI initiator or a target, so it's likely to be found at both ends of the connection. The iSnap2200p is driver-compatible with the iSnap2110, so many networks may have a mix of iSCSI chips.
Four core processors and a CRC engine can handle up to 300,000 SCSI requests/s. This is important because SCSI commands aren't simply read/write operations. Performing this additional processing on-chip relieves the host of the same chores. The reduced overhead lets the host dedicate more time to managing storage hardware.
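The article's own numbers give a sense of the headroom here. At the 800-Mbyte/s iSCSI throughput quoted in the spec box, with 64-kbyte blocks, the request rate sits well under the 300,000-requests/s ceiling (treating one Mbyte as 10^6 bytes, which is an assumption on my part):

```python
# Request rate implied by 800-Mbyte/s iSCSI throughput at 64-kbyte
# blocks. 1 Mbyte is taken as 10**6 bytes -- an assumption.
BLOCK = 64 * 1024          # bytes per request
rate = 800e6 / BLOCK       # requests per second
print(f"{rate:,.0f} requests/s")  # 12,207 requests/s
```

Small-block workloads drive the request rate far higher, which is where the 300,000-requests/s command engine earns its keep.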
WHERE IS THE STORAGE?
The iSnap2200p doesn't interface directly to storage devices, which is why there are no hard-disk interfaces like Serial ATA (SATA), Serial Attached SCSI (SAS), or even parallel SCSI. This isn't unusual. Other SAN solutions like Fibre Channel and InfiniBand take a similar approach, with a host processor sitting between the network controller and the target's storage interface chips.
Typically, the host processor has built-in interfaces or uses off-chip interfaces for storage devices. These days, SATA, SAS, and Fibre Channel interfaces are the norm, replacing SCSI and IDE. The host processor usually isn't the bottleneck, given the speed and throughput of hard disks. It often handles additional chores such as RAID support, so it's important that it isn't overloaded by network traffic management.
The x4 PCI Express host interface has significantly more bandwidth than the PCI-X interface on the iSnap2110. It supports advanced error reporting features and incorporates high-performance DMA engines.
Moving data back and forth via network packets isn't the only thing the iSnap2200p can do. With its new remote direct memory access (RDMA) support, data can be placed into, or copied from, the appropriate memory on a remote client without significant overhead.
This is similar to the reduction in host overhead when using DMA with local devices. RDMA requires support at both ends of the connection. But it has proven to be of significant value for InfiniBand systems where RDMA is already in use.
Of course, RDMA support isn't the same thing as SCSI support. It's a slightly different programming model, so the application and operating system must be set up to take advantage of RDMA. RDMA likely will play a much bigger role as platforms such as the iSnap2200p are deployed.
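That different programming model can be illustrated with a toy sketch. This is not the real RDMA verbs API -- the names `MemoryRegion` and `rdma_write` are invented for illustration -- but it captures the one-sided idea: the target registers and advertises a buffer, and the initiator then writes it directly, with no receive handler running on the remote CPU.

```python
# Toy model of one-sided RDMA semantics. Illustration only -- not the
# libibverbs API; MemoryRegion and rdma_write are invented names.

class MemoryRegion:
    """A registered buffer the remote side has exposed for RDMA."""
    def __init__(self, size):
        self.buf = bytearray(size)
        self.rkey = id(self) & 0xFFFF  # stand-in for a real remote key

def rdma_write(region, rkey, offset, data):
    """One-sided write: the initiator places data directly into the
    remote region. The remote CPU runs no receive handler."""
    if rkey != region.rkey:
        raise PermissionError("bad rkey")
    region.buf[offset:offset + len(data)] = data

# The "target" registers 4 kbytes and advertises (region, rkey) to the
# initiator, which then writes without any remote software involvement.
mr = MemoryRegion(4096)
rdma_write(mr, mr.rkey, 0, b"block 0 payload")
print(mr.buf[:15].decode())  # block 0 payload
```

The point of the model is what's absent: no receive call, no copy through an intermediate socket buffer. That's where the host-overhead savings come from.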
Ethernet has significantly higher overhead than InfiniBand, so it isn't likely to encroach on applications like supercomputing. Still, Ethernet-based iSCSI with RDMA support definitely will have a positive impact on SAN and NAS networks.
The iSnap2200p includes hardware support for IPsec, the Internet Engineering Task Force (IETF) encryption standard. Unfortunately, IPsec implementations vary in the algorithms and modes they support, so taking advantage of the hardware will depend on how you use the iSnap2200p.
At this point, developers will work with Silverback Systems to see how this support can be utilized. The hardware can process up to 1 Mpacket/s. It also can handle 3DES + SHA1/MD5 encryption at up to 2 Gbits/s and AES-CTR and AES-XCBC at up to 4 Gbits/s.
The 32-bit MIPS 4KEp general-purpose processor handles Internet Key Exchange (IKE) instead of the core processors. It can run from flash or RAM. Having this processor handle key management works well because this aspect of IPsec imposes little computational burden compared to raw Ethernet throughput or the encryption/decryption process. It can negotiate up to 200 keys/s.
Encryption support is useful, but it isn't always a big factor, since many SAN implementations sit well behind the corporate firewall, where encryption isn't used. Some environments keep the SAN even further from the outside world with a dedicated SAN network connecting computational servers to storage servers.
The iSnap2200p is available in a 31- by 31-mm, 696-pin plastic ball-grid array (PBGA) package. It uses 5.7 W with IPsec support enabled and 5 W without. It requires a single 25-MHz crystal. The chip costs under $65, so host bus adapter (HBA) pricing is expected to be $350 to $400. A two-port, 1-Gbit/s version is available. The iSnap2110 is still available at $50.
Functionality: Link, iSCSI, TOE, RDMA
Ports: 10-Gbit Ethernet (XAUI) or quad 1-Gbit Ethernet (SGMII)
Interface: x4 PCI Express
Features: IPsec hardware acceleration
CPU utilization: 15% at 800-Mbyte/s iSCSI throughput (64-kbyte blocks)
Package: 696-pin PBGA
Power: 5 W, 5.7 W with IPsec
Technology: 130-nm CMOS
Pricing: under $65