Multicore is the wave of the future, with more cores turning up in our designs as time goes on. Some push core counts to the limit, while others creep up to a half-dozen very large cores. And then there’s the problem of programming these things.
Communication lies at the heart of the matter. Memory is the key in a symmetric multiprocessing (SMP) or non-uniform memory access (NUMA) environment. In other architectures, communication may run over packet switches, FIFOs, or a host of other link technologies employed in different chips.
Programmers have layered a number of standard protocols such as sockets, streams, Linx, and TCP/IP on top of these connections (see “Freeing Communications” at www.electronicdesign.com, ED Online 17317). If you really want to get complicated, check out the Object Management Group’s (OMG’s) CORBA or Data Distribution Service (DDS).
NEW SPEC ON THE BLOCK
Now, the Multicore Association has weighed in with its own lightweight alternative designed for multicore environments: the Multicore Communications API (MCAPI) Specification (see “Multicore Communications API”). The V1.063 incarnation of the standard is available online.
MCAPI tries to minimize the footprint, overhead, and programming design requirements. Although the spec is about 120 pages, most of that material comes from the details, not from a large or complex application programming interface (API). It maps well to most hardware too.
Also, it can be implemented on most real-time operating systems (RTOSs) without modifying the operating system, though better performance will likely come from closer integration. The bigger question is whether it becomes a standard option with popular operating systems. Will it make it into the Linux kernel? Who knows?
The basic concepts include a node, a port, and a message. The architecture offers a zero-copy, pass-by-reference option that maps well onto most shared-memory architectures, though this is an implementation detail. The efficiency of that approach is its main advantage, and it’s not lost on designers of message-based operating systems like QNX and Enea’s OSE, which uses Linx.
Nodes map nicely to cores or virtual cores. Ports send or receive messages, but MCAPI differs from most higher-level standards in that links between ports are unidirectional and one-to-one. Communication is assumed to be reliable, which is typical when dealing with memory or on-chip switching or buses.
Error handling hasn’t been overlooked, though. Likewise, the standard addresses functions such as link management and backpressure support. On the other hand, details like data encoding and byte order (big versus little endian) are left to the programmer. That’s not a big issue for embedded systems, where the components are normally known and consistent, but it can be a challenge in a more open environment.
IN THE REAL WORLD
Higher-level protocols can be built on MCAPI. Yet the spec isn’t designed to be an overarching solution. Instead, it specifically targets multiple cores. Still, communication between cores is likely to span the backplane in many instances.
Skipping across a network isn’t impossible, though the assumption of fast, reliable communications often means moving to higher-level protocols that handle reliability themselves. The organization has more work underway and more specifications in the wings, but they will likely remain as lean as MCAPI.
Open-source versions of MCAPI are available for Linux. Commercial implementations are available from companies like PolyCore Software, whose Poly-Messenger follows the MCAPI standard. Even Freescale has a version for its QorIQ-based multicore processor line. Companies like PolyCore Software take the task a bit further by providing the configuration tools often needed for new chips.
MCAPI will be a good move for most embedded developers, where C/C++ is king and performance and simplicity are watchwords. So, are things getting better or worse?