Electronic Design

Avoid Wait States In Cache Operations

Cache misses can be costly in terms of processor wait states if main memory must be updated before the line is filled. A more efficient algorithm stores the miss address and data, then fills the request and updates main memory later.

In this circuit, every memory access is checked by the cache controller to see if the data for the memory address is available in the cache (Fig. 1). If the upper bits of the address match the tag field stored at the cache line selected by the lower address bits, the memory request is filled by the cache in a single cycle and the controller takes no action. If there's no match (i.e., a cache miss), the controller asserts a signal to the memory controller to request a memory-access cycle. The cache must then be updated in one of two ways, as determined by the update-flag bit in the cache entry (Fig. 2). The bit is automatically set for a cache location when the data in that location is changed (written to) by the processor.
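The hit/miss check can be sketched in software as a direct-mapped cache lookup. This is a minimal model, not the article's hardware: the line count, field widths, and names (`cache_lookup`, `update`) are assumptions chosen for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical direct-mapped cache model: 256 lines, 32-bit addresses.
   The low 8 bits of the address select a line; the remaining high bits
   are the tag stored alongside the data. */
#define CACHE_LINES 256

struct cache_line {
    uint32_t tag;    /* high-order address bits stored with the data */
    uint32_t data;
    bool     valid;
    bool     update; /* update flag: set when the processor writes   */
};

static struct cache_line cache[CACHE_LINES];

/* Returns true on a hit (single-cycle fill from the cache);
   on a miss the caller must request a memory-access cycle. */
bool cache_lookup(uint32_t addr, uint32_t *data_out)
{
    unsigned index = addr % CACHE_LINES;  /* low bits select the line */
    uint32_t tag   = addr / CACHE_LINES;  /* high bits form the tag   */
    struct cache_line *line = &cache[index];

    if (line->valid && line->tag == tag) {
        *data_out = line->data;           /* hit: fill immediately    */
        return true;
    }
    return false;                         /* miss: controller acts    */
}
```

In hardware the index selection and tag comparison happen in parallel with the processor's bus cycle; the software model only mirrors the decision logic.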

If the bit hasn't been set, the location may simply be overwritten with the tag bits of the new address and the data from main memory; the new entry is made in parallel with filling the original memory request. If the bit is set, the old address and data are first retained in registers so the memory request can still be filled without delay. The cache controller then requests a second memory cycle to update main memory with the stored data at the stored address.
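The miss-handling sequence above can be sketched as follows. This is a software model under the same assumptions as before; the latched "registers" become a small pending-write-back structure, and the memory-controller cycles become reads and writes to a simulated memory array (`main_mem`, `handle_miss`, and `drain_writeback` are illustrative names, not from the article).

```c
#include <stdint.h>
#include <stdbool.h>

#define LINES     256
#define MEM_WORDS 4096

struct line { uint32_t tag, data; bool valid, update; };
static struct line cache2[LINES];

/* Simulated main memory and memory-controller cycles. */
static uint32_t main_mem[MEM_WORDS];
static uint32_t mem_read(uint32_t a)              { return main_mem[a % MEM_WORDS]; }
static void     mem_write(uint32_t a, uint32_t d) { main_mem[a % MEM_WORDS] = d; }

/* Latched copy of the displaced entry, pending write-back. */
static struct { uint32_t addr, data; bool pending; } writeback;

/* On a miss: if the victim line is dirty (update flag set), latch its
   address and data, then fill the processor's request without delay. */
uint32_t handle_miss(uint32_t addr)
{
    unsigned idx = addr % LINES;
    struct line *l = &cache2[idx];

    if (l->valid && l->update) {
        writeback.addr    = l->tag * LINES + idx; /* reconstruct old address */
        writeback.data    = l->data;
        writeback.pending = true;
    }

    uint32_t data = mem_read(addr);               /* fill the request now */
    *l = (struct line){ .tag = addr / LINES, .data = data,
                        .valid = true, .update = false };
    return data;
}

/* Requested later by the controller to update main memory. */
void drain_writeback(void)
{
    if (writeback.pending) {
        mem_write(writeback.addr, writeback.data);
        writeback.pending = false;
    }
}
```

The point of the latch is ordering: the processor's fill proceeds immediately, and the write-back to main memory is deferred to a later memory cycle.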

The register used to store the cache-miss addresses may also be used to trap any processor address for examination in case of a parity or bus error. As shown in Fig. 1a, the trapped address can be read by simply multiplexing the register onto the system data bus.
