
Cadence’s Deep-Neural-Network Processor Pushes to 3.4 TMACs/W

Sept. 26, 2018
The Tensilica DNA 100 deep-neural-network processor weaves in Tensilica DSP support to manage new network layers.

Cadence extended its machine-learning (ML) offering with the Tensilica DNA 100 deep-neural-network processor (see figure), which incorporates Tensilica DSP support to handle new network layers. It targets end-node applications such as autonomous vehicles, robots, drones, surveillance systems, and augmented and virtual reality, where neural-network inference is increasingly deployed.

The Tensilica DNA 100 architecture scales from 0.5 to 12 TMACs (trillion multiply-accumulates per second) and can deliver up to 3.4 TMACs/W. Its sparse compute engine provides high MAC utilization while reducing power requirements; by skipping unnecessary work, it can roughly double system throughput without pruning and triple it when networks are pruned.

The Tensilica DNA 100’s sparse compute engine provides high MAC utilization while reducing power requirements.

The system shrinks bandwidth requirements by compressing weights and activation values, and it reduces computation by issuing only non-zero MAC operations. Dedicated accelerators handle non-convolution layers, including pooling and Eltwise operations. The system, including its DSP support, is programmable to handle new software requirements, and the architecture's layers are customizable. It is compatible with Tensilica Instruction Extensions (TIE), and the DNA 100 has its own direct-memory-access (DMA) support.
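The zero-skipping idea behind the sparse compute engine can be sketched in a few lines of Python. This is an illustration of the concept only, not Cadence's hardware or software; the vector length, random data, and roughly 50% ReLU activation sparsity are assumptions made for the example.

```python
# Conceptual sketch of zero-skipping MACs: only MACs with two non-zero operands
# are issued, so operand sparsity translates directly into fewer computations.
import numpy as np

def sparse_dot(weights, activations):
    """Accumulate only MACs where both operands are non-zero,
    counting how many MACs were actually issued."""
    acc, issued = 0.0, 0
    for w, a in zip(weights, activations):
        if w != 0.0 and a != 0.0:   # skip zero operands entirely
            acc += w * a
            issued += 1
    return acc, issued

rng = np.random.default_rng(0)
w = rng.standard_normal(1024)                   # dense (unpruned) weights
a = np.maximum(rng.standard_normal(1024), 0.0)  # ReLU output: roughly 50% zeros

result, issued = sparse_dot(w, a)
print(f"dense MACs: {w.size}, issued MACs: {issued}, "
      f"effective speedup ~{w.size / issued:.1f}x")
# Pruning weights to zero would let the loop skip additional MACs,
# raising the effective speedup further.
```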

Larger systems can be built using multiple DNA 100 processors on a chip. These are linked together using a network-on-chip (NoC) configuration; a chip-to-chip (C2C) link can be used to scale across chips.
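As a rough illustration of how a layer might be partitioned across several engine instances, the sketch below splits the output rows of a matrix-vector product into tiles, runs each tile separately, and merges the results. The run_engine() helper and the even row split are hypothetical stand-ins; the actual partitioning across the NoC and C2C links is handled by Cadence's tools.

```python
# Hypothetical multi-engine tiling sketch, not the DNA 100 programming model.
import numpy as np

def run_engine(weight_tile, activations):
    # Stand-in for one accelerator instance computing its slice of a layer.
    return weight_tile @ activations

def run_multi_engine(weights, activations, n_engines=4):
    tiles = np.array_split(weights, n_engines, axis=0)   # split output rows
    partials = [run_engine(t, activations) for t in tiles]
    return np.concatenate(partials, axis=0)              # merge tile results

W = np.random.default_rng(1).standard_normal((512, 256))
x = np.random.default_rng(2).standard_normal(256)
assert np.allclose(run_multi_engine(W, x), W @ x)        # same result as one engine
```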

“Our customers’ neural-network inference needs to span a wide spectrum, both in the magnitude of AI processing and the types of neural networks, and they need one scalable architecture that’s just as effective in low-end IoT applications as it is in automotive applications demanding tens or even hundreds of TMACs,” says Lazaar Louis, senior director of product management and marketing for Tensilica IP at Cadence. “With the DNA 100 processor and our complete AI software platform and strong partner ecosystem, our customers can design products with the high performance and power efficiency required for on-device AI inferencing.”

Software support includes the Tensilica Neural Network Compiler, which works with prior Cadence ML platforms. The toolchain includes a network analyzer, a quantizer for 8- or 16-bit weights, a network optimizer, a DMA and tile manager, and target-specific library selection. The architecture is also compatible with the Android Neural Networks API.
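To make the quantizer step concrete, the sketch below applies a generic symmetric per-tensor 8-bit scheme to a float weight tensor. This is an assumption-laden illustration of what weight quantization does in general, not the algorithm used by the Tensilica Neural Network Compiler; the example tensor is made up.

```python
# Generic symmetric per-tensor int8 weight quantization (illustrative only).
import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 with a single per-tensor scale."""
    scale = np.abs(weights).max() / 127.0   # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"scale={scale:.4f}, max reconstruction error={err:.4f}")
```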

