Electronic Design

Multicore My Way

The trend today isn’t just putting multiple cores on one chip, but also connecting multicore chips to build large systems.

Multicore designs are all the rage, and the reasons are easy to understand. Faster clock rates send power consumption and heat dissipation through the roof, most of the easy architectural improvements have already been made, and larger caches simply take up space.

If all these factors scaled linearly, the tradeoffs would be a wash. In practice, though, more processor cores running at a lower clock rate deliver more throughput while consuming less power. Sun Microsystems' UltraSparc T1 chip layout also highlights the issue of chip real estate (see the figure).
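The throughput-versus-power claim can be sketched with a back-of-the-envelope rule of thumb: dynamic power scales roughly with C·V²·f, and because supply voltage tends to track frequency, power grows roughly as the cube of clock rate. The function names and the cubic assumption below are illustrative, not from the article:

```python
# Rule-of-thumb model: dynamic power ~ C * V^2 * f, and since supply
# voltage V roughly tracks frequency f, power ~ f**3 (an idealization).

def relative_power(cores, freq_scale):
    """Power of `cores` cores, each at `freq_scale` of the baseline clock."""
    return cores * freq_scale ** 3

def relative_throughput(cores, freq_scale):
    """Idealized throughput, assuming perfectly parallel work."""
    return cores * freq_scale

# One core at full clock versus two cores at 70% clock:
single = (relative_throughput(1, 1.0), relative_power(1, 1.0))
dual = (relative_throughput(2, 0.7), relative_power(2, 0.7))
# The dual-core point gives about 1.4x the throughput for roughly
# 0.69x the power under these assumptions.
```

Real silicon never scales this cleanly, but the direction of the tradeoff is why the lower-clock, more-cores recipe wins.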

The move to 90- and 45-nm technologies significantly raises the number of cores that can fit on a chip. Likewise, the area allocated to things other than the processor core has been growing significantly. Caches are expanding, and memory controllers have moved on-chip for many architectures.

Multiple-core architectures are quite varied (see the table). Typically, an architecture is optimized for its main target environment. For example, Azul Systems' Vega 2 targets the Java enterprise market where Java applications run in a J2EE (Java 2 Enterprise Edition) environment (see "Lots Of Java").

Instruction sets don't make much of a difference among multicore chips, since Intel and AMD share a common 32- and 64-bit instruction-set architecture (ISA). Well, almost, but the overlap is well over 95%. AMD uses HyperTransport links to connect chips together (see "HyperTransport: The Ties That Bind"), while Intel uses a central memory-controller architecture (see "Memory Front And Center").

Benchmarks often highlight architectural differences, but keep in mind the phrase "liars, damn liars, and chip vendors." A more important issue when evaluating multicore chips is threading in application software.

Putting multiple cores on-chip will benefit all but the system that runs one major application with one thread. Mileage may vary, but the benefits of a large number of cores are likely to remain greater for servers until application developers adjust to the plethora of threads.
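The adjustment developers must make is essentially the chunk-and-fan-out pattern sketched below: split the work into roughly one piece per hardware thread and let a pool execute the pieces concurrently. This is a minimal Python illustration; the function names are hypothetical, and note that CPython's global interpreter lock limits CPU-bound speedup from threads, so real workloads often use processes or a runtime without that restriction:

```python
# Minimal sketch of per-core work splitting (illustrative names).
from concurrent.futures import ThreadPoolExecutor
import os

def partial_sum(chunk):
    # Each worker handles one slice of the data independently.
    return sum(x * x for x in chunk)

def threaded_sum_of_squares(data, workers=None):
    # Default to one worker per logical CPU reported by the OS.
    workers = workers or os.cpu_count() or 1
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

The pattern, not the arithmetic, is the point: until applications are decomposed this way, extra cores mostly sit idle on the desktop while servers, with their naturally independent requests, soak them up.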

It's interesting to note that only some multicore chips implement multithreading. The bottom line is that this class of chips should really be rated by the number of threads they can execute simultaneously. Also, parallel programming languages are a hot research topic—the hardware has finally caught up to their needs. But that's another story.
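Consistent with rating chips by simultaneous threads, most operating systems report logical CPUs, that is, hardware threads rather than physical cores. A minimal check in Python (the standard library offers no portable way to count physical cores separately):

```python
import os

# os.cpu_count() returns the number of logical CPUs the OS sees,
# which on a multithreaded chip is cores multiplied by threads per core.
logical_cpus = os.cpu_count() or 1
print(f"Hardware threads visible to the OS: {logical_cpus}")
```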
