
Upmem Moves Processing Into Memory for Analyzing Finances and Human Genes

Sept. 25, 2017
To get around the memory wall, the semiconductor start-up is looking to process information inside memory, which could vastly improve bandwidth and latency.

The semiconductor industry is up against a memory wall: processor performance is increasingly capped by the time it takes to retrieve data from external memory. That could be a major issue for the future of computing.

But a semiconductor start-up thinks the way around that wall is to process information inside the memory itself, which could vastly improve bandwidth and latency. Upmem recently raised $3.6 million to continue developing its processing-in-memory technology, which has long been a fringe concept in the chip industry.

“When Moore’s Law was strong, and provided twice the processing power every 18 months, it was useless to try something else,” said Jean-Francois Roy, Upmem’s chief operating officer, in an email. “Now, we see that all the big players in the data center are looking at heterogeneous computing because they need a complementary solution for big data.”

Founded in 2015 and partially funded by Western Digital, the company is starting to get its chips ready for manufacturing. By skirting the memory wall, the chips could tackle machine-learning tasks more efficiently and speed up algorithms that analyze financial data for trading stocks or human genes for personalizing drugs.

In its chips, Upmem combines main memory with thousands of DRAM processing units (DPUs), which sit directly next to the data. The specialized silicon is linked to a traditional processor, which runs an operating system and offloads tasks to the battalion of DPUs. The result is 20 times better performance without using any additional power, Roy said.
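
To make that offload pattern concrete, here is a minimal, self-contained C sketch of the host's side of such a job, written as a plain simulation rather than against any published Upmem interface (the NR_DPUS count and the simulate_dpu_sum stand-in are illustrative assumptions). The point it shows: each slice of data is reduced where it already sits, so only one small partial result per DPU has to travel back to the CPU.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_DPUS 64  /* hypothetical number of in-memory processing units */

/* Stand-in for the work one DPU would do on the slice of data already
 * resident in its DRAM bank: none of the input crosses the memory bus. */
static int64_t simulate_dpu_sum(const int32_t *slice, size_t len)
{
    int64_t acc = 0;
    for (size_t i = 0; i < len; i++)
        acc += slice[i];
    return acc;
}

int main(void)
{
    size_t n = 1 << 20;                      /* 1M elements, ~4 MB of data */
    int32_t *data = malloc(n * sizeof *data);
    for (size_t i = 0; i < n; i++)
        data[i] = (int32_t)(i % 7);

    /* Host side: hand each "DPU" its slice, then combine the tiny results. */
    size_t per_dpu = n / NR_DPUS;
    int64_t total = 0;
    for (int d = 0; d < NR_DPUS; d++)
        total += simulate_dpu_sum(data + (size_t)d * per_dpu, per_dpu);

    /* With real PIM hardware, the ~4 MB stays put and only NR_DPUS partial
     * sums (a few hundred bytes) move to the CPU for the final reduction. */
    printf("total = %lld\n", (long long)total);
    free(data);
    return 0;
}

On real hardware, the loop over DPUs would run in parallel inside the memory modules; it is serialized here only to keep the sketch runnable anywhere.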

Upmem is building a coprocessor for server chips from Intel and Advanced Micro Devices, not an accelerator for machine learning tasks in the style of Google and Nvidia. The company plans to package 16 chips into DIMM modules, which can be inserted into standard motherboard slots, Roy said.

The idea is not exactly new. David Patterson – a titan of reduced instruction set computing and an architect of Google’s tensor processing unit – proposed what he called intelligent random-access memory over two decades ago. The goal was to cut down on the speed gap between processing and memory before it got too wide.

Micron, one of the world’s biggest memory chipmakers, has built a processing-in-memory chip called the Automata Processor to find patterns in unstructured financial and biomedical data. Venray, founded by chip architect Russell Fish, is aiming to license similar technology so that server chipmakers can stop piling on big, expensive caches, where instructions and data are stored for quick access.

Upmem’s executives said that its chips have several advantages over previous attempts at processing inside DRAM, which, like GPUs and FPGAs, is naturally parallel. Instead of squeezing more memory into computer processors, Upmem is “adding processing units to existing DRAM so that we don’t reinvent the wheel,” Roy said.

The company plans to provide tools so that central processors can be programmed to hand out workloads to thousands of DPU cores located in main memory – something that should make it easier to experiment with its chips. Roy also said that its tools simplify programming, compiling, and debugging software written for the DPUs.
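
On the DPU side, such tools would presumably let developers write ordinary C that each core runs against the slice of memory it sits beside. The hypothetical kernel below (the dpu_count_matches name and the single-result convention are assumptions, not Upmem's interface) illustrates the shape of that code: it scans only its local buffer (here a toy genomic string) and emits a single count for the host to collect.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-DPU kernel: everything it reads lives in the DRAM bank
 * next to the core, and its only output is one 64-bit count that the host
 * later gathers. Compiled here as plain C so the sketch runs anywhere. */
static uint64_t dpu_count_matches(const uint8_t *local_slice, size_t len,
                                  uint8_t needle)
{
    uint64_t hits = 0;
    for (size_t i = 0; i < len; i++)
        if (local_slice[i] == needle)
            hits++;
    return hits;
}

int main(void)
{
    /* Small stand-in for the megabytes each DPU would own in main memory. */
    uint8_t slice[] = { 'G', 'A', 'T', 'T', 'A', 'C', 'A', 'G', 'A' };
    printf("local hits: %llu\n",
           (unsigned long long)dpu_count_matches(slice, sizeof slice, 'A'));
    return 0;
}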

Upmem would not say when its chips would be released. The Grenoble, France-based company said that it needs memory chipmakers like SK Hynix, Samsung, Micron, and Western Digital to manufacture the DPU, but it declined to comment on specific partnerships.
