
Virtualizing Everything On The Network

Dec. 9, 2014
Welcome to the virtualized network. Everything from packet processing to storage management has moved into virtual machines that can be deployed as necessary.

Remember the relatively simple days of networking? PCs came with hard disks and network cards, and those network cards connected to network hubs or switches. A separate box usually served as the firewall or gateway.

Many data centers have been built around this approach, often with 1U rack-mount systems (Fig. 1). Each 1U system incorporates a processor, memory, storage, and a network interface. The nodes are linked together, and to the outside world, by a top-of-rack (ToR) switch. Ethernet typically serves as the network interface, although some systems employ other technologies such as InfiniBand.

1. The typical rack-system configuration links a bank of 1U servers to the outside world using a top-of-rack switch. The servers combine processing, storage, and networking support in the 1U package.

Of course, this approach comes in many packaging variations, including blade servers. Larger form factors also provide room for more robust nodes with additional storage, memory, and processing power, as well as space for other computational units like GPUs.

One other common component is the power supply, which typically powers everything within the system. Power requirements differ depending on the technologies and devices involved.

2. Splitting compute and storage offers one way to provide more modularity. NAS or SAN platforms provide storage to compute-only nodes that typically run virtual machines.

Splitting out storage offers a way to create a more modular environment (Fig. 2). In this case, a storage area network (SAN) or network-attached storage (NAS) provides storage support for compute nodes that typically contain only a processor and memory plus a network interface that ties everything together. Dedicating additional network interfaces to storage improves security and throughput by isolating storage traffic from the links to the outside world. Protocols like iSCSI give the compute nodes remote block-storage access.

Rack scale architecture (RSA), or rack disaggregation (RD), takes the next step, splitting out all of the components and linking them via a high-speed fabric (Fig. 3). Fiber-optic connections are often considered because of their higher bandwidth and longer reach compared to copper alternatives. The biggest challenge is latency. One technology addressing this space is Intel’s Silicon Photonics.

3. Rack scale architecture (RSA) groups processing, storage, and networking support into separate regions linked by a high-speed fabric. This allows for more modular allocation of resources.

RSA compute nodes are designed to be denser than those of other compute systems. Most of the RAM moves to a separate area, where it can be allocated to the compute side as needed. Of course, latency is an issue, which usually means larger caches. Fabric-oriented memory controllers remain on the drawing board, and keeping processors and RAM together remains an option.

Cloud service providers (CSPs) such as Amazon Web Services (AWS) are interested in these platforms because of their massive investment in hardware. RSA provides a way to improve scalability as well as migration to new hardware. It also allows individual components to be swapped out when necessary, which is valuable because processors, memory, storage, and networking technologies evolve at different rates and on different schedules.

Hardware options aren’t the only things changing in the data center. Network virtualization can be implemented using any of these platforms.

Virtualizing the Network

“Software defined” is the latest trend in the virtualization arena. For instance, the software-defined data center (SDDC), built on a software-defined infrastructure (SDI), can include software-defined networking (SDN) and software-defined storage (SDS). Of course, network function virtualization (NFV) is also in the mix (see “10 Things You Should Know About NFV”).

SDDC targets RSA with fast, efficient hardware that can run just about everything in software. Hardware acceleration handles elements like Ethernet protocols, but the idea is to use stock hardware to deal with networking and storage chores, including challenging applications such as deep packet inspection (DPI) and network routing functions.

SDI is built out of virtual machines (VMs) that run on pools of compute, storage, and network nodes (Fig. 4). It targets RSA, but the approach works even with 1U server stacks. Running applications in VMs is already the norm. With SDI, however, everything is a VM, including network routing. The latest processor hardware can handle the control and data planes.

4. Virtual switching links application and NFV VMs on the data plane. Orchestrators manage the overall environment.

Many use Intel’s Data Plane Development Kit (DPDK) to implement SDN and build NFV services. DPDK includes libraries built on a multicore framework for Intel x86-based processors. Moreover, the latest processors and Intel Ethernet controllers have been optimized for DPDK support. Though DPDK isn’t the only solution addressing SDI, it’s one of the more notable options.
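For a sense of the programming model, the following C sketch is modeled loosely on DPDK’s basic forwarding examples: it initializes the environment abstraction layer, sets up one receive and one transmit queue on a single port, and then polls packets in a tight loop. The port number, ring sizes, and single-port assumption are illustrative, and a production application adds error checking, port validation, and per-core launch logic.

/* Minimal sketch of a DPDK-style user-space forwarding loop. Details such as
 * port checks, multi-queue setup, and error paths are trimmed, and exact APIs
 * vary somewhat between DPDK releases. */
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_RING 128
#define TX_RING 512
#define BURST    32

int main(int argc, char **argv)
{
    uint16_t port = 0;                      /* assumed: first bound Ethernet port */
    struct rte_eth_conf conf = {0};
    struct rte_mbuf *bufs[BURST];

    /* Initialize the Environment Abstraction Layer (cores, hugepages, NICs). */
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    /* Packet-buffer pool shared by the RX and TX queues. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("pool", 8192, 256, 0,
                                   RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    /* One RX queue and one TX queue on the port, then start it. */
    rte_eth_dev_configure(port, 1, 1, &conf);
    rte_eth_rx_queue_setup(port, 0, RX_RING, rte_eth_dev_socket_id(port), NULL, pool);
    rte_eth_tx_queue_setup(port, 0, TX_RING, rte_eth_dev_socket_id(port), NULL);
    rte_eth_dev_start(port);

    for (;;) {
        /* Poll a burst of packets from RX queue 0. */
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST);

        /* Echo them back out the same port: a trivial "network function." */
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);

        /* Drop anything the TX queue could not accept. */
        while (nb_tx < nb_rx)
            rte_pktmbuf_free(bufs[nb_tx++]);
    }
    return 0;
}

Because the loop polls the NIC directly from user space, neither interrupts nor the kernel network stack sit in the data path, which is what lets stock x86 servers approach line rate.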

SDN is still built on switching hardware, but it can handle L2 through L4 and offers configurability. Since it doesn’t deal with the higher-level protocols, it tends to be faster and more economical. Two major switch platforms are Broadcom’s StrataXGS Tomahawk series and Cavium’s XPliant.

The StrataXGS Tomahawk series can deliver 3.2 Tb/s with 32 ports of 100G Ethernet, 64 ports of 40G/50G Ethernet, or 128 ports of 25G Ethernet. It supports OpenFlow 1.3+ using Broadcom’s OF-DPA (OpenFlow Data Plane Abstraction). Overlay and tunneling support tackles VXLAN, NVGRE (network virtualization using generic routing encapsulation), MPLS (multiprotocol label switching), and SPB (shortest path bridging).

The 28-nm XPliant CNX88091 Ethernet switch chip also offers 3.2 Tb/s, with up to 128 ports of 25G Ethernet; other port combinations include 32 ports of 100G Ethernet. XPliant supports standards such as OpenStack and OpenFlow along with overlay and tunneling protocols, and will support emerging protocols like Geneve.

NVGRE uses Generic Routing Encapsulation (GRE) to tunnel layer 2 packets through layer 3 networks. MPLS adds short path labels to packets so that they can be routed without examining longer network addresses. Geneve addresses how traffic is tunneled in an SDN environment, and is backed by major VM players such as Microsoft, Red Hat, and VMware. An SDN environment will typically contain a mix of these protocols.
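To make the idea of an overlay concrete, the following C struct sketches the 8-byte VXLAN shim header defined in RFC 7348. A tunneling endpoint wraps each original Ethernet frame in outer Ethernet, IP, and UDP headers plus this shim, and the switch silicon described above builds and strips such headers at line rate. The struct is for illustration only and isn’t taken from either vendor’s SDK.

#include <stdint.h>
#include <stdio.h>

/* VXLAN shim header (RFC 7348): 8 bytes inserted between the outer UDP
 * header (destination port 4789) and the original, inner Ethernet frame. */
struct vxlan_hdr {
    uint8_t flags;          /* the I flag (0x08) must be set for a valid VNI */
    uint8_t reserved1[3];
    uint8_t vni[3];         /* 24-bit Virtual Network Identifier for the overlay segment */
    uint8_t reserved2;
};

int main(void)
{
    /* On the wire: outer Ethernet + outer IP + outer UDP + this shim
     * + the complete inner Ethernet frame from the virtual machine. */
    printf("VXLAN shim header: %zu bytes\n", sizeof(struct vxlan_hdr));
    return 0;
}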

These two switch platforms handle the various protocols using five major components: parse, lookup, modify, queue, and count. Parsing examines incoming packets. Lookup determines where a packet should go and what should be done with it. Packet modification depends on the protocol involved. The queue stage buffers packets toward their egress ports. Finally, count provides statistics about the system. Typical switches only count packets, bytes, and errors, whereas these new platforms provide insight into more traffic and system details.
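A toy software model helps show how those five stages fit together. Everything below, from the field layout to the one-entry “MAC table” and the VLAN rewrite, is invented purely for illustration; the real platforms implement these stages in silicon at terabit rates.

#include <stdint.h>
#include <stdio.h>

struct pkt   { uint8_t dst_mac[6]; uint16_t vlan; uint16_t len; };
struct stats { uint64_t packets, bytes; };

static struct stats port_stats[4];          /* count: per-port statistics */

static void pipeline(struct pkt *p)
{
    uint8_t last = p->dst_mac[5];           /* parse: pull fields from the header */
    int egress   = last & 0x3;              /* lookup: a trivial MAC-to-port "table" */
    p->vlan      = 100;                     /* modify: rewrite a header field */
    /* queue: the packet would be buffered toward the egress port here */
    port_stats[egress].packets++;           /* count: update per-port statistics */
    port_stats[egress].bytes += p->len;
}

int main(void)
{
    struct pkt p = { {0x00, 0x1b, 0x21, 0x0a, 0x0b, 0x07}, 0, 64 };
    pipeline(&p);
    printf("egress port 3: %llu packets\n",
           (unsigned long long)port_stats[3].packets);
    return 0;
}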

Both platforms are more generic than the current crop of routers. The switches can be used for leaf/ToR and spine portions of a network, although these often differ in terms of port configurations.

Virtualized Network Functions

Network function virtualization, or NFV, places functions that were often found in network appliances (services like firewalls and VPN gateways) into virtual machines. The advantages of NFV are scalability and manageability. The challenge, however, is making the VMs match the reliability of dedicated hardware (see “Achieving Carrier-Grade Reliability with NFV”). Fortunately, VMs can readily be configured for high availability.

Moving away from old-style switches and network appliances gives vendors and users more flexibility. Functions such as firewalls are easy to scale, and linking one NFV VM to another, a task previously done with cables, becomes a software operation. As the load increases, additional VMs can be started. Load balancing is another well-understood function. NFV VMs address Layer 4 through Layer 7 services.

VMs offer additional security, but they alone don’t make a good security policy. That said, companies like HyTrust provide policy-based security at the VM level. These systems deliver the kind of virtual security that matches the type of physical security used in the past. Physical access to servers and switches tends to be very limited, and for good reason. Security of virtual assets is even more important due to the ease with which these files can be copied and modified.

In general, a VM is a VM, and these will be connected to network and storage resources. A VM can be given access to these resources while being isolated from other VMs and resources within the cloud.

Virtual Storage

In the beginning, there was block storage, on top of which file systems were built. Hard disks, and now flash memory, form the primary foundation. Virtualization mirrors block-device functionality, providing virtual versions of the disk controllers. Behind the scenes, VMs in an SDI environment use local files or block storage, as well as network protocols like iSCSI, to add storage. An iSCSI target, often a VM with access to physical devices, services any number of iSCSI initiators (clients).

Designers also must contend with a plethora of new storage technologies. For example, Diablo Technologies’ Memory Channel Storage (MCS) puts flash memory onto the processor’s memory channel (see “Memory Channel Flash Storage Provides Fast RAM Mirroring” on electronicdesign.com). The technology is found in products like SanDisk’s ULLtraDIMM. Diablo’s NanoCommit technology is designed to mirror DRAM in flash.

Another non-volatile dual in-line memory module (NVDIMM) solution comes in the form of NV-DRAM. Products like Viking Technology’s ArxCis-NV blend DRAM and flash on a single DIMM. A supercapacitor powers an on-module processor that copies DRAM contents to flash if power fails; the flash contents are restored to DRAM when the system powers back on. In addition, other non-volatile technologies have begun to emerge in the DRAM socket space. Until now, technologies such as MRAM have found a home mainly in embedded memories.

New approaches to storage management include persistent memory file systems (PMFS). This method is designed to take advantage of non-block, non-volatile storage that the processor can address directly.
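A short C sketch illustrates what non-block access means in practice: a file on a persistent-memory file system is mapped into the address space and updated with ordinary loads and stores instead of block reads and writes. The mount point and file name below are assumptions made for the example.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Assumed mount point for a persistent-memory file system such as PMFS. */
    int fd = open("/mnt/pmfs/log.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    /* Map the file; on a persistent-memory file system, stores go to
     * non-volatile memory rather than through a block-device page cache. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "persistent record");     /* a plain store, not a write() call */
    msync(p, 4096, MS_SYNC);            /* flush the update to the medium    */

    munmap(p, 4096);
    close(fd);
    return 0;
}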

Another technology found in virtualized solutions is Seagate’s Ethernet-based Kinetic disk drive (see “Object Oriented Disk Drive Goes Direct To Ethernet” on electronicdesign.com). This drive has a pair of 1-Gigabit-Ethernet interfaces. Instead of conventional block storage, it uses a key/value model, which has been integrated into many “big data” technologies (e.g., Hadoop).

Maxta’s MaxDeploy architecture provides a VM-centric storage system that even works with existing 1U server configurations. It presents a single virtual interface to storage that’s typically spread across nodes containing compute and storage.

No one configuration addresses all installations, but a migration toward SDI and SDDC is underway. Service providers and enterprise IT run large systems that can take advantage of SDN and NFV. Adoption is on the rise as the technologies continue to mature.
