Ethernet continues to reign atop the list of high-speed interconnects, even as it moves into the 10-Gbit/s space (see "High-Speed Serial Technology Drives Board Interconnects" at www.electronicdesign.com, ED Online 15348). Combine this with the emerging PCI Express I/O virtualization, and the result is Intel's new 82598 10-Gbit/s Ethernet controller and its cousin, the 1-Gbit/s 82575 chip (see "I/O Virtualization," ED Online 15358).
As part of Intel's I/O Acceleration Technology, these adapters specifically target servers where virtual-machine monitors (VMMs) will host operating systems in multiple virtual machines (VMs).
With current technology, the VMM manages shared peripherals instead of letting a VM access the peripheral directly, because current adapters provide no I/O virtualization support. Instead, the VMM delivers a virtual peripheral to the VM, which creates a performance and management bottleneck. The solution is hardware I/O virtualization.

Hardware I/O Virtualization
Hardware I/O virtualization isn't new, but its implementation at the PC level is. Part of the reason for this lag is the cooperation required across the hardware. In particular, device adapters must be part of the solution by providing virtual interfaces that are independent of each other from a control standpoint (Fig. 1).
In Intel's case, the 82598 offers 16 virtual-machine device queues (VMDq). This maps well to the 32 transmit and 64 receive physical queues per port. Each VM using the adapter will be allocated one VMDq. An operating system in a VM will use the virtual device as it would a real device.
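The one-queue-per-VM allocation described above can be sketched as a small model. This is a hypothetical illustration only; in practice, the VMM and driver perform the assignment, and the class and method names here are invented:

```python
# Toy model of VMDq allocation: each VM is handed one device queue from a
# fixed pool of 16 (pool size per the article; the API is hypothetical).

class VMDqPool:
    def __init__(self, num_queues=16):
        self.free = list(range(num_queues))
        self.owner = {}  # queue id -> vm id

    def allocate(self, vm_id):
        if not self.free:
            # Out of hardware queues: the VM would fall back to a
            # VMM-emulated virtual device instead.
            raise RuntimeError("no free VMDq")
        q = self.free.pop(0)
        self.owner[q] = vm_id
        return q

    def release(self, q):
        del self.owner[q]
        self.free.append(q)

pool = VMDqPool()
q0 = pool.allocate("vm-0")
q1 = pool.allocate("vm-1")
```

Once a queue is allocated, the guest operating system drives it as if it owned a dedicated adapter.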
The key difference from a virtual device provided by the VMM is that the operating system communicates with the Ethernet adapter directly. Security measures are also in place so that the VM can't get outside its memory sandbox. Essentially, the VM and the hardware can share only the VM's memory, even though the operating system thinks it has access to all physical memory.
Additional hardware that handles address remapping between the device and physical memory controls the DMA transfers issued by the network adapter. The support is comparable to that found within a VMM-enabled processor, which doesn't allow a VM to access memory outside its restricted arena. In fact, the VMM will typically combine its control of a VM with its control of any hardware-virtualized devices, such as Intel's Ethernet adapters.
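The remapping check can be sketched as a simple page-granular translation table. All names here are hypothetical; real hardware uses DMA-remapping structures such as those defined by Intel's VT-d:

```python
# Toy DMA remapping: a per-VM table maps device (I/O virtual) pages to
# host-physical pages. Any address outside the table is rejected, which
# is what keeps the device inside the VM's memory sandbox.

PAGE = 4096

class DmaRemapTable:
    def __init__(self, mapping):
        self.mapping = mapping  # IOVA page number -> host-physical page number

    def translate(self, iova):
        page, offset = divmod(iova, PAGE)
        if page not in self.mapping:
            raise PermissionError("DMA outside the VM's sandbox")
        return self.mapping[page] * PAGE + offset

# VM is granted two pages of host memory (page numbers are invented).
table = DmaRemapTable({0: 100, 1: 205})
host_addr = table.translate(PAGE + 8)  # second IOVA page, offset 8
```

A DMA request targeting an unmapped page raises an error rather than touching memory the VM doesn't own, mirroring how the remapping hardware fences off the adapter.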
The other piece of the puzzle is interrupt splitting: the hardware must now notify a particular VM, rather than the VMM, for most interrupts. The hardware also needs to distinguish a VM that's currently running, possibly on a different core each time, from one that's active but not currently running.
In the former case, the interrupt can be delivered to the VM immediately. In the latter, the VM must handle the interrupt when it begins running again. This is analogous to software interrupts for applications, but the implementation tends to be significantly more complex.
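The two delivery cases above can be modeled in a few lines. This is a behavioral sketch only, with invented structures; the real mechanism involves the interrupt controller, the VMM, and the adapter cooperating:

```python
# Toy interrupt delivery: a running VM receives the interrupt right away;
# a descheduled VM accumulates pending interrupts that the VMM delivers
# when it resumes the VM.

class VM:
    def __init__(self, name):
        self.name = name
        self.running = False
        self.pending = []   # interrupts parked while descheduled
        self.handled = []   # interrupts actually delivered to the guest

    def interrupt(self, vector):
        if self.running:
            self.handled.append(vector)   # delivered directly
        else:
            self.pending.append(vector)   # parked until next resume

    def resume(self):
        self.running = True
        self.handled.extend(self.pending)  # drain parked interrupts
        self.pending.clear()

vm = VM("vm-0")
vm.interrupt(0x41)   # VM not running: parked
vm.resume()          # parked interrupt delivered on resume
vm.interrupt(0x42)   # VM running: delivered immediately
```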
Features like direct virtual interrupts and DMA support are usually implemented for performance reasons, but other areas benefit, too. For example, transmit fairness and head-of-line-blocking avoidance are easier to implement in hardware with virtualization support incorporated into the chips.

Splitting Ethernet Chores
Virtualization is a major feature of the 82598 and 82575 chips, but it isn't the only one. The chips don't incorporate a full TCP/IP offload engine (TOE), but they still do more than the typical Ethernet adapter. Like the virtualization support, these offloads are tuned for multicore processors.
One interesting feature is the ability to split or replicate incoming packets (Fig. 2). Typically, packet processing is based on header information that includes routing data such as the Internet Protocol (IP) destination address. Deeper packet inspection is possible, but usually an application on the host node handles that level of filtering once the data arrives.
Headers are often a small fraction of a packet, so the adapter can separate headers from payloads for processing. This lets a larger amount of header data fit more efficiently into a processor's cache. So, in this case, what was once the application code's responsibility is now handled by the hardware.
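The split described above amounts to dividing each frame at a header boundary so headers can be packed together (and stay cache-resident) while payloads land in separate buffers. The 64-byte split point below is an assumption for illustration; real hardware parses the actual protocol headers:

```python
# Toy header split: divide each incoming frame at a fixed header length,
# collecting headers and payloads into separate buffer lists.

HEADER_LEN = 64  # assumed split point for this sketch

def split_packets(packets):
    headers, payloads = [], []
    for pkt in packets:
        headers.append(pkt[:HEADER_LEN])
        payloads.append(pkt[HEADER_LEN:])
    return headers, payloads

# Three full-size 1500-byte Ethernet payloads.
frames = [bytes([i]) * 1500 for i in range(3)]
hdrs, data = split_packets(frames)
```

Packing the 64-byte headers contiguously means a few cache lines can hold the routing-relevant data for many packets at once.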
Another significant feature is receive-side scaling, which distributes interrupts and load across multiple cores. This type of load balancing makes more effective use of multiple cores.
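The core idea of receive-side scaling is to hash each flow's addressing tuple so that packets from the same flow always land on the same core while different flows spread across cores. Real RSS hardware uses a Toeplitz hash; this sketch substitutes a CRC-32 purely for illustration:

```python
import zlib

# Toy receive-side scaling: hash the flow 4-tuple to pick a core.
# Same flow -> same core (preserving per-flow ordering); different
# flows spread across the available cores.

NUM_CORES = 4  # assumed core count for this sketch

def rss_core(src_ip, src_port, dst_ip, dst_port):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CORES

c1 = rss_core("10.0.0.1", 1234, "10.0.0.2", 80)
c2 = rss_core("10.0.0.1", 1234, "10.0.0.2", 80)  # same flow
```

Keeping a flow pinned to one core avoids cross-core packet reordering and keeps that flow's connection state warm in a single cache.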
Check out the tables for more details on the chips' other features. They also include features usually found on pre-virtualization hardware, such as the Network Controller Sideband Interface (NC-SI) and SMBus management interfaces, remote boot loading via PXE and iSCSI, and support for a range of standard Ethernet interfaces.
The chips and adapters can support any compatible PCI Express-based host processor. However, some features will be available only with Intel processors because of the interaction among the various hardware components. Designers will need to keep a close eye on host-processor compatibility.