Multicore Processors Manage Massive Military Data Flows

June 22, 2012
Military applications require enclosures that can handle rugged environments, but they also need major computing resources. This often means designing systems that employ FPGAs and GPUs in addition to multicore CPUs.

Intelligence wins wars. That’s why the U.S. Department of Defense (DoD) has launched the Global Information Grid (GIG). This major communications project will deliver end-to-end capability for collecting, processing, and storing data and providing services—and there’s a lot of data to manage.

As part of the GIG project, the Defense Airborne Reconnaissance Office (DARO) developed the Distributed Common Ground System (DCGS) to integrate all ground and surface systems. This includes functions such as imagery intelligence (IMINT) ground/surface systems in the Common Imagery Ground/Surface System (CIGSS) architecture. DCGS also handles measurement and signature intelligence (MASINT) and signals intelligence (SIGINT) information.

All of this data comes from a massive number of resources, from unmanned aerial vehicles (UAVs) to the troops on the ground. Likewise, radar and other data acquisition systems collect even more information, adding to the computational burden.

Packaging Key To Rugged Systems

For most field work, size, weight, and power (SWaP) are the watchwords for designers. But these demands must be balanced against performance, capacity, and functionality. Improvements in semiconductors have significantly boosted performance, capacity, and functionality while minimizing power requirements, enabling solutions that are lighter and more compact.

High reliability is key in rugged environments. Designers must meet tougher environmental requirements than even automotive applications. Extreme heating, cooling, vibration, and other factors need to be addressed. Maintenance, including in-field maintenance, also is critical in military applications. Field replaceable units (FRUs), also known as line replaceable units (LRUs), often are used to make field maintenance easier.

Aeronautical Radio Incorporated (ARINC) pioneered one notable FRU form factor, the ATR box, which often is described as the Air Transport Rack or Austin Trumbull Radio. Full-size ARINC 400 ATR boxes are very common in military designs. The ARINC standards address a range of issues, not just form factors. An ATR Full Long box (10.12 by 10.625 by 19.6 in.) has room for a dozen 6U boards.

At one time, a dozen 6U boards were needed to implement many subsystems. These days, 3U systems are more common, allowing more compact designs. These designs might use smaller ATR boxes, which are often designated as a fraction of the full size, such as ½ ATR. Some systems are even smaller.

The Extreme Engineering Solutions XPand4201 is a sub-½ ATR, forced-air-cooled enclosure for conduction-cooled modules (Fig. 1). Its sidewall heat exchangers increase cooling. Kontron’s ApexVX ½ ATR box handles five 3U slots (Fig. 2). It supports Kontron’s VXFabric and VXcontrol Smart Technology. These boxes target command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) applications such as UAVs, unmanned ground vehicles (UGVs), aircraft, tanks, and high-mobility multipurpose wheeled vehicles (HMMWVs).

1. The Extreme Engineering Solutions XPand4201 is a sub-½ ATR, forced-air-cooled enclosure for conduction-cooled modules. It weighs less than 19 lb.
2. Kontron’s ApexVX is a ½ ATR enclosure that supports five 3U slots.

Elma’s 3U VPX Mini ATR supports up to three 3U slots (Fig. 3). It supports an OpenVPX VITA 65 backplane (see “OpenVPX Simplifies Rugged Design Tasks”). The system can be configured with solid-state storage and a 250-W power supply. The Aitech Defense Systems A190 RediBuilt Integrated COTS (commercial off-the-shelf) Computer is even smaller (Fig. 4). It fits a pair of 3U boards and supports OpenVPX. Like many ATR enclosures, it uses standard, circular MIL-DTL-38999 I/O connectors.

3. Elma’s 3U VPX Mini ATR is only 133 by 180 by 250 mm. It supports up to three 3U slots.
4. The Aitech Defense Systems A190 RediBuilt Integrated COTS Computer is a 1/4 ATR Short enclosure that supports a pair of 3U slots.

Platforms like the Global Hawk and Predator UAVs can handle multiple instances of the larger form factors, while smaller platforms would use one or two compact cases like Elma’s or Aitech’s. Some systems are delivered in multiple form factors, like the GMS Computing Engine SZC91X (Fig. 5). The SZC91X is a rugged, secure server based on a 2.4-GHz, six-core Intel Westmere Xeon processor. It includes Trusted Platform Module (TPM) support for security, two internal, removable storage slots, and eSATA ports for external storage.

5. The GMS Computing Engine SZC91X packs a 2.4-GHz, six-core Intel Westmere Xeon processor. It has the typical server peripheral complement including TPM support and two removable storage slots.

Changing System Boards

VME and CompactPCI boards have been the mainstay for military and industrial systems. However, the latest serial bus boards like VPX and CompactPCI Serial provide higher I/O bandwidth than the parallel bus interfaces. Still, high-speed I/O bandwidth isn’t always a requirement, and legacy systems often need the processing power that new boards can provide.

MEN Micro’s A21 6U VME board runs a dual-core, 1.67-GHz P1022 PowerPC QorIQ from Freescale (Fig. 6). It has a pair of PMC/XMC slots. The XMC slots support PCI Express. The board supports Gigabit Ethernet, microSD, and mSATA storage. It’s comparable to OpenVPX boards but with a parallel system bus.

6. MEN Micro’s 6U A21 VME board runs a dual-core, 1.67-GHz P1022 PowerPC QorIQ from Freescale.

Likewise, the Concurrent Technologies 6U CompactPCI PP 93x/x1x single-board computer (SBC) uses the latest 22-nm quad-core Intel Core i7-3612QE 2.1-GHz processor (Fig. 7). The PCI backplane may not be the fastest thing on the market, but the processor and system are at the high end. It supports 16 Gbytes of error-correction code (ECC) SDRAM.

7. The Concurrent Technologies 6U PP 93x/x1x CompactPCI SBC uses the latest 22-nm, quad-core Core i7-3612QE 2.1-GHz processor from Intel.

Like MEN Micro’s board, the PP 93x/x1x has PMC/XMC sites. But while the A21 has a pair of x1 PCI Express links for the XMC site, the PP 93x/x1x supports x4 and x8 PCI Express lanes. Using XMC sites is one way to get access to high-performance devices without changing the backplane. PMC sites are often easy to support depending upon the processor’s hub controller.

Another way to mix parallel and serial bus interfaces is to use the backplane standards that mix the two. This includes VXS (VITA 41), which blends VME and high-speed serial interfaces. CompactPCI Plus builds on CompactPCI and adds PCI Express as well (see “Embedded PCI Express Still Emerging”).

VPX (VITA 46) defines a wide range of interfaces, including PCI Express, Gigabit Ethernet, Serial RapidIO, and InfiniBand, as well as backplane configurations. A VPX board will support one kind of high-speed serial interface, but it may also include a Gigabit Ethernet control plane. VPX also defines backplane connections for analog/RF signals (VITA 67) and fiber optics (VITA 66) that serve as peripheral interfaces instead of board-to-board connections (see “Rugged Technologies Make Military Hardware Tougher”).

OpenVPX (VITA 65) was designed to simplify VPX by defining commonly used subsets (see “OpenVPX Simplifies Rugged Design Tasks”). Most VPX boards built these days are designed to fit the OpenVPX standard.

Mercury Computer Systems’ 6U HDS6601 rugged compute blade is designed for intelligence, surveillance, and reconnaissance (ISR) and electronic warfare (EW) applications (Fig. 8). It can handle a pair of eight-core Intel Xeon E5-2648L processors, supporting up to 32 hardware threads. It uses an FPGA with Mercury’s Protocol Offload Engine Technology (POET) to provide backplane interfaces. The FPGA can handle any of the VPX backplane interfaces, such as Serial RapidIO, 10-Gbit Ethernet, and InfiniBand.

8. The Mercury Computer Systems 6U HDS6601 SBC has a pair of eight-core Intel Xeon E5-2648L processors. An FPGA supports Mercury’s Protocol Offload Engine Technology (POET).

The OpenVPX 3U form factor is getting a lot of use because it has sufficient backplane connections for the high-speed serial interfaces and supports significant processor performance. Kontron’s VX3042 and VX3044 SBCs deliver 10-Gbit Ethernet and PCI Express 3.0 support in a compact package (Fig. 9). They target high-performance embedded computing (HPEC) applications like radar with Intel’s 2.1-GHz quad-core Core i7-3612QE processor. The boards have 16 Gbytes of soldered ECC DDR3 SDRAM. The integrated Intel HD Graphics 4000 supports three DisplayPort interfaces.

9. Kontron’s 3U VPX VX3042 and VX3044 SBCs deliver 10-Gbit Ethernet and PCI Express 3.0 support.

AdvancedTCA boards are larger and normally found in communication applications, though some have found their way into military and avionic applications. Emerson Network Power’s ATCA-7370 AdvancedTCA board has two 1.8-GHz, eight-core Intel Xeon E5-2648L processors and dual 10-Gbit Ethernet interfaces. Its 36 lanes of PCI Express 3.0 can handle a new termination module with six 10-Gbit Ethernet ports.

10. 4DSP’s 3U VP680 VPX board supports its Xilinx Virtex-6 FPGA with an FMC header.

Power architecture processors like Freescale’s QorIQ are commonly found in military and avionic applications, but Intel x86 processors have become very popular. The latest batch of Intel processors supports Intel’s AVX (Advanced Vector Extensions), which has been very useful (see “Intel’s AVX Scales To 1024-Bit Vector Math”). AVX allows the Intel processors to address DSP and multimedia applications common in military and avionic applications. Of course, FPGAs and GPUs also address high-performance applications.
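To see why AVX matters for DSP-style workloads, consider the multiply-accumulate loop at the heart of filtering and correlation. Below is a minimal sketch in C using Intel’s AVX intrinsics; the function name and test values are our own, and the `target("avx")` attribute is a GCC/Clang extension that enables AVX code generation for just this function.

```c
#include <immintrin.h>  /* Intel AVX intrinsics */

/* Multiply-accumulate: out[i] += a[i] * b[i], eight floats per step. */
__attribute__((target("avx")))
void vec_mac(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);   /* load 8 floats */
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vo = _mm256_loadu_ps(out + i);
        vo = _mm256_add_ps(vo, _mm256_mul_ps(va, vb));
        _mm256_storeu_ps(out + i, vo);        /* store 8 results */
    }
    for (; i < n; i++)                        /* scalar tail */
        out[i] += a[i] * b[i];
}
```

One 256-bit AVX register holds eight single-precision samples, so each loop iteration retires eight multiply-adds instead of one.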

FPGA And GPGPU Change Design Choices

FPGAs—especially large, high-performance FPGAs—have been common in demanding military and avionic applications. They often are used to handle high-speed I/O and parallel processing chores. The FPGA Mezzanine Card (FMC) is a mezzanine interface designed to connect interface I/O such as analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). 4DSP’s VP680 3U VPX board includes an FMC header and a Xilinx Virtex-6 FPGA (Fig. 10).

Of course, interface cards aren’t the only modules that can be used with an FMC socket. Bittware’s AA-FMC (Fig. 11) includes four Anemone104 (AN104) processors based on the Epiphany architecture from Adapteva (see “Multicore Array Targets Embedded Applications”). The module acts as a floating-point coprocessor for an FPGA, providing more efficient floating-point support than implementing the math in FPGA fabric.

11. Bittware’s AA-FMC includes four Anemone104 (AN104) parallel processors based on the Epiphany architecture from Adapteva.

GPUs handle more than graphic display chores these days, which is why they are sometimes called general-purpose GPUs (GPGPUs). They are programmed through interfaces like CUDA and OpenCL. Nvidia originally developed CUDA for its GPUs, and it is now available with low-level virtual machine (LLVM) compiler support that can target any platform (see “GPU Architecture Improves Embedded Application Support”). This is akin to OpenCL, which also can target a range of platforms, including CPUs.
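The model behind both CUDA and OpenCL is data-parallel: the developer writes a kernel that computes one output element, and the runtime launches it across thousands of threads or work-items. Here is a plain-C sketch of the idea using the classic SAXPY operation; the names are illustrative and not part of either API.

```c
/* The kernel body: what one CUDA thread or OpenCL work-item would run,
 * with gid standing in for the global thread/work-item index. */
static void saxpy_kernel(int gid, float a, const float *x, float *y)
{
    y[gid] = a * x[gid] + y[gid];
}

/* On a GPU, the runtime launches the kernel once per element and the
 * hardware schedules those launches across hundreds of cores. This
 * serial loop stands in for that scheduler. */
void saxpy(int n, float a, const float *x, float *y)
{
    for (int gid = 0; gid < n; gid++)
        saxpy_kernel(gid, a, x, y);
}
```

The same structure explains why GPUs suit radar and signal processing: the work divides into many independent, identical element-wise computations.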

GPUs often are found on SBCs for display purposes. The GE Intelligent Platforms PMCCG1 PMC card has an S3 2300E GPU (Fig. 12). It supports OpenGL as well as Microsoft’s DirectX. GE Intelligent Platforms’ GRA111 3U VPX graphics board features an Nvidia GT 240 GPU (Fig. 13). The GT 240 has 96 CUDA processing cores.

12. The GE Intelligent Platforms PMCCG1 PMC card has an S3 2300E GPU.
13. The GE Intelligent Platforms GRA111 is a 3U VPX graphics board with an Nvidia GT 240 GPU.

GPUs provide hundreds of processing cores, but they tend to be power hungry and generate a lot of heat. This can be a challenge in rugged conduction-cooled environments. A CPU normally is involved in initiating GPU jobs and processing the results.

Nvidia’s Kepler architecture pushes the envelope. Its dynamic parallelism allows GPU jobs to spawn additional jobs without interacting with the host CPU. The Hyper-Q feature enables Nvidia Kepler GPUs to run from up to 32 task queues. Prior Nvidia GPUs used a single queue.

The Kepler GPUDirect feature is designed for multiple-GPU systems. It permits a GPU to move data from its local memory to the memory of another GPU when the GPUs are PCI Express peers. Transfers can also run over networks with compatible GPUDirect network adapters. Mellanox’s ConnectX-3 InfiniBand adapters support this feature (Fig. 14).

14. Mellanox’s ConnectX-3 adapters support Nvidia’s GPUDirect feature found in Nvidia’s Kepler GPUs.

Military and avionic applications require a wide range of platforms. The applications often require custom support, but standard platforms like VPX provide a COTS starting point.
