Parallel processing is everywhere, with almost as many software choices as hardware. Symmetrical-multiprocessing (SMP) designs and clustering dominate large-core, multicore solutions on PCs and servers. Graphics processing units (GPUs) have their own architecture optimized for data flow, while specialized multicore solutions abound.
In many cases, the hardware vendor may supply a de facto standard, but even this can change over time. Nvidia’s CUDA runs on the company’s GPUs, but it has been turning into a more general multicore programming environment. Enter the Khronos Group’s OpenCL, which is likely to displace CUDA in the long run given its growing support, wider target audience, and broader selection of hardware platforms.
No one parallel-programming approach meets everyone’s needs, though it will be interesting to see how programming environments and languages incorporate this type of support. Environments such as National Instruments’ graphical LabVIEW and The MathWorks’ Matlab will support GPUs as well as conventional multicore targets.
Unfortunately, C remains the primary embedded programming language, so frameworks like OpenMP and Intel’s Threading Building Blocks are likely to be the main gateway into the multicore realm. Moving to graphical dataflow languages like LabVIEW or Microsoft’s VPL (Visual Programming Language) is another possibility. LabVIEW is mature, while VPL is a relatively young platform.
Intel’s Larrabee will add to the platform selections, which already include interesting architectures such as IBM’s Cell processor, found in the Sony PlayStation 3 (see “Games Flourish In A Parallel Universe” at www.electronicdesign.com, ED Online 15745). Though the Cell has been around for a while, developers still find ways to use it efficiently.
SOFTWARE KEY TO CONTINUUMS • New chips are always a challenge to developers, but at least software compatibility is making things easier in one area. A year or two ago, most microcontroller vendors started wrapping their chip lines into a software continuum that would typically span the 8-, 16-, and 32-bit realm.
This approach appears to have been successful, and it’s starting to wrap up from a coverage standpoint for vendors that started early (e.g., Freescale Semiconductor). The commonality was often peripheral and pin compatibility, complemented by software compatibility at the C interface level.
Switching between computing platforms can be as simple as a recompile and a chip swap. Improved user interfaces for platform development tools are making this process even easier. Still, integrated development environments (IDEs) and runtimes are just the start. Look for operating systems and other middleware to be added to the mix.
DEBUG, DEBUG, DEBUG • The one chore that’s always “in development” is debugging. Yet when an improvement does come along, the payoff can be significant. Better tools can mean fewer bugs, faster delivery, and more reliable code.
Basic debugging has changed little over the last 20 years, leading to some interesting challenges as multicore and many-core debugging moves from specialized niches to the mainstream. Tools like the GNU debugger (GDB) need a major overhaul to address new programming and debugging techniques in addition to providing a mechanism for incremental third-party improvement.
Faster hosts make tools for code coverage and static analysis easier to accept since they will no longer slow down the development process, but developers will need to start using them more to gain an advantage. Better integration into IDEs helps. Education and falling prices will help, too. These tools need to move from specialized add-ons to tightly integrated components with more consistent support between products.
Tools for exposing an embedded system at runtime will continue to improve, but few will match environments like LabVIEW, which puts a graphical front end on applications by default (see the figure). It’s still surprising how many tool vendors don’t know about platforms like this or appreciate the improvements developers gain when printf is no longer the debugging hook of choice.
Decreasing the developer’s overhead caused by debugging never seems to gain the limelight like multicore support. The problem with limited debugging tools is that they leave less time for making applications safe and secure.
UNSAFE, INSECURE • Odds are good that you’re using C or a variant like C++ or C#. For most, these variants are used incrementally as a better C. Yet C will typically deliver code that’s not safe or secure because of common programming practices. It’s unlikely that we will see a mass migration to Ada or even a major movement to Java, but these kinds of environments will be needed to improve safety and security via bug reduction or elimination. For now, separation is the name of the game.
Embedded developers need to take note of problems like botnets on PCs connected to the Internet. Embedded device connectivity is now the norm, and millions of embedded devices, from cell phones to light switches, will eventually become targets unless they become inherently more secure. Trusted platform modules (TPMs) are becoming more common and are the starting point for high-security platforms.
High-security real-time operating systems (RTOSs) implement the Separation Kernel Protection Profile (SKPP). Last year, Green Hills Software’s Integrity-178B with SKPP support received EAL6+ NIAP/NSA certification.
These platforms are small because certification of large systems is impractical if not impossible. Certification is a formal process, but other developers will benefit since the same platform is usually the basis of commercial versions of the RTOS as well. This includes a range of platforms from LynuxWorks’ LynxOS-178 and LynxOS SE to DDC-I’s Deos.
The design process also needs to change, since security and safety must start to permeate the design process. It can’t be added later or by an external group. Isolation tools such as partitioned virtual-machine operating systems will help, though this only serves to prevent a problem from spreading.
VIRTUALLY SECURE • Separation is key to the virtual-machine support infiltrating the embedded space. Platforms such as Xen, KVM, VMware, and Microsoft’s Hyper-V are taking advantage of the virtual-machine hardware that’s prevalent on PC hardware. However, the more interesting software will be found in virtual-machine support on other processor architectures.
Embedded developers will look to this support primarily to address legacy code. But it also makes mixing an RTOS with an OS like Linux or Windows easier and more secure. The ability to partition for security will hopefully be a big reason, too.
SEEING SIMULATIONS • Simulation’s stock is rising, in concert with its expanding functionality. Chip simulation, system simulation, and even simulation of virtual worlds all fall into the mix.
Chip simulations are now a critical part of high-end multicore design and deployment. Software developers can now begin their programming chores well in advance of silicon.
The demand for improved simulation performance gets louder all the time. But simulation hosts are now multicore. More powerful hosts and improved simulation support will make chip simulation even more useful.