Modeling Next-Generation SRAMs

May 27, 2002
In the past, modeling SRAM memory components for functional verification didn't present a great challenge. Simple interfaces and relatively small address spaces made it practical for designers to code the models in HDL. However, the growing diversity and complexity of high-speed SRAMs, and even the new applications of these devices, make a compelling case for advanced modeling architectures or commercial solutions.

It helps to first understand why models are needed. In most cases, SRAM models are used to simulate and verify SRAM controller logic, and then for system-level regression simulations. It is also increasingly important to use the models for system-level performance analysis early in the design phase, and to analyze and select the actual memory components or memory architectures for the system design.

Moreover, the functionality requirements for the models vary with the task at hand. For performance analysis, designers may need only a cycle-accurate model; for design and regression testing, they certainly want the model to report timing and protocol errors. Previously, verification engineers would typically employ a robust model for interface design and verification, then replace that model with a simple, lightweight version for regression simulations.

The idea was that shedding timing and protocol checks, and any potential PLI/OMI bottlenecks, would speed up simulation. Today, however, verification engineers realize the value of employing SRAM models as a key element in the system verification environment. The trick is to understand that all important system data passes through these SRAM models. What better place is there to verify system-level data transactions and transformations?

To extract real verification value from the SRAM models, you must consider some advanced modeling functions. Back-door access lets you get data in and out of the model without going through the pins and wasting simulation cycles. It is also ideal to have a system for "knitting together" the various physical memory representations into the logical memory space used by the system.
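A back-door mechanism of this kind can be sketched as a sparse, lazily allocated memory array with direct deposit and peek routines. This is a minimal illustration, not a real product API; the names (sram_t, backdoor_write, and so on) and the paged-storage scheme are assumptions made for the example.

```c
/* Sketch of back-door access into a sparse SRAM model.
 * Pages are allocated only when touched, so a large address
 * space costs little, and deposits consume no simulation cycles. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_BITS 12
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 1024               /* 4-MB logical space for this sketch */

typedef struct {
    uint8_t *pages[NUM_PAGES];       /* NULL until first accessed */
} sram_t;

static uint8_t *get_page(sram_t *m, uint32_t addr)
{
    uint32_t p = addr >> PAGE_BITS;
    if (!m->pages[p])
        m->pages[p] = calloc(PAGE_SIZE, 1);  /* untouched SRAM reads as 0 here */
    return m->pages[p];
}

/* Back-door write: deposits data directly into the model's storage,
 * bypassing the pin interface entirely. */
void backdoor_write(sram_t *m, uint32_t addr, const uint8_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        get_page(m, addr + i)[(addr + i) & (PAGE_SIZE - 1)] = src[i];
}

/* Back-door read: peeks at stored data without driving a bus cycle. */
uint8_t backdoor_read(sram_t *m, uint32_t addr)
{
    return get_page(m, addr)[addr & (PAGE_SIZE - 1)];
}
```

Because the deposit never touches the pins, a testbench can preload megabytes of stimulus or dump the entire memory image for off-line comparison in what amounts to zero simulated time.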

It is valuable to be able to load, save, and compare these memory images outside of the simulator as well as during simulation, and doing so increases the value of "call-backs," another important model function. Call-backs enable the SRAM models to alert the testbench to specific memory activity. This makes it possible to identify erroneous data transactions during simulation as they occur.

For example, in a header data transfer, an intended 8-byte memory transaction might erroneously initiate a 10-byte transaction that could overwrite other valid header data in the memory. Although the timing and protocol of the transactions may be error free, without a call-back, the data error wouldn't be detected until the damaged header data was read back later, if ever. If erroneous data is detected, there's still the burden of identifying when data corruption occurred, which could have been hundreds or thousands of cycles earlier.
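The header-overrun scenario above can be sketched with a simple write call-back. Everything here is illustrative: the registration function, the model's write hook, and the 8-byte header slot at a fixed address are assumptions made for the example, not a real verification-IP interface.

```c
/* Sketch of a write call-back that flags a transaction spilling past
 * its allotted 8-byte header slot, at the moment the write occurs. */
#include <stdint.h>
#include <stdio.h>

#define HDR_BASE 0x100u
#define HDR_LEN  8u                  /* intended header transaction size */

typedef void (*write_cb_t)(uint32_t addr, uint32_t len, void *ctx);

static write_cb_t g_cb;
static void *g_ctx;

/* Testbench registers interest in write activity. */
void register_write_callback(write_cb_t cb, void *ctx)
{
    g_cb = cb;
    g_ctx = ctx;
}

/* Invoked by the model for every write burst, before data is committed. */
void model_write(uint32_t addr, uint32_t len)
{
    if (g_cb)
        g_cb(addr, len, g_ctx);
    /* ... commit the burst to the memory array ... */
}

/* Testbench call-back: an intended 8-byte header write that runs long
 * (e.g., 10 bytes) is caught here, not hundreds of cycles later when
 * the damaged header data is read back -- if it ever is. */
static void check_header(uint32_t addr, uint32_t len, void *ctx)
{
    int *errors = ctx;
    if (addr == HDR_BASE && len > HDR_LEN) {
        fprintf(stderr, "overrun: %u-byte write into %u-byte header slot\n",
                len, HDR_LEN);
        (*errors)++;
    }
}
```

The timing and protocol of the oversized burst are perfectly legal, which is exactly why only a data-aware call-back can catch it when it happens.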

Even these simple examples clearly show that a robust SRAM modeling solution is critical for functional verification. Component complexity, vendor diversity, and a high verification value have combined to drive SRAM modeling into the same paradigm as DRAM modeling.

DRAM vendors have realized that standard Verilog models are no longer sufficient for real verification work. Most have given up trying to develop models in-house in favor of partnering with commercial verification IP vendors. The same is now true of SRAM vendors. Increasing device complexity, new verification functionality requirements, and the myriad of EDA tools to support are making it impractical for vendors to develop and maintain good-quality SRAM models.

It's fair to say that nobody wants to spend engineering resources on this task. You must first collect all of the datasheets for each device and memory type, understand the differences and nuances between the various vendor components, then create an accurate model of the devices. It takes an experienced engineer to understand and manage every degree of freedom. One could argue that it would be a crime to waste a valuable resource on this task, especially when adding the work it takes to thoroughly check and report all timing and protocol violations, not to mention the verification features and testbench interfaces.

Commercial modeling solutions, like those currently used for DRAM and flash simulation, are becoming the most practical way to model next-generation SRAMs. The most popular of these commercial verification IP products use "C" to model the memories, which then communicate with the simulator through the PLI or OMI in any simulation environment (Verilog, VHDL, C). In addition to quality, this solution provides the most robust verification features via a simple C interface, which exposes memory transactions and memory data during simulation. Built-in features can also include error injection features to test ECC, special assertions with callbacks, and simple methods for creating logical memory spaces from component models.
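The error-injection feature mentioned above can be sketched as a corruption applied on the model's read path, so the controller's ECC logic is forced to detect or correct the fault. The function names and the single-injection register are assumptions for illustration; a commercial product would expose something richer.

```c
/* Sketch of read-path error injection for exercising ECC logic.
 * Stored data stays intact; the corruption appears only on read-out,
 * which is what a real bit flip in the array would look like to the
 * controller's ECC decoder. */
#include <stdint.h>

typedef struct {
    uint32_t addr;       /* word address to corrupt */
    uint32_t bit_mask;   /* bits to flip on read-out */
    int      enabled;
} inject_t;

static inject_t g_inj;

/* Testbench arms a single-word injection via the C interface. */
void inject_error(uint32_t addr, uint32_t bit_mask)
{
    g_inj.addr = addr;
    g_inj.bit_mask = bit_mask;
    g_inj.enabled = 1;
}

/* Model read path: XOR in the armed fault for the targeted address only. */
uint32_t model_read(const uint32_t *mem, uint32_t addr)
{
    uint32_t data = mem[addr];
    if (g_inj.enabled && g_inj.addr == addr)
        data ^= g_inj.bit_mask;
    return data;
}
```

Flipping one bit should be silently corrected by a SECDED controller; flipping two should raise its uncorrectable-error flag, giving the testbench a direct way to verify both paths.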

