Parallel Programming Language Brings Software Closer To Hardware

Feb. 28, 2008

Programming languages are evolving to bring software closer to the hardware. As hardware architectures become more parallel (with the advent of multicore processors and FPGAs, for example), sequential programming languages are forced to represent that parallelism, and the result isn’t always elegant or intuitive.

A research paper on the landscape of parallel computing from the University of California, Berkeley, supports this view, noting that “Since real-world applications and hardware architectures are inherently parallel, so too should be the software programming model” (Reference 1).

LabVIEW isn’t sequential. Instead, this graphical programming language, based on structured dataflow, handles parallelism natively. LabVIEW code is written in block-diagram form, much as you would describe a problem on paper, and it offers the programming constructs and characteristics of a text-based language such as C, with one important difference: the code can be written in parallel just as easily as it can be written sequentially.

LabVIEW’s dataflow nature means that any time code has a parallel sequence on the block diagram, the separate code paths will try to execute at the same time (Fig. 1). This makes LabVIEW a favorable tool for the parallel programming of multicore processors and FPGAs.

Application Development For Multicore Processors

When a LabVIEW application runs on a processor (single CPU, multiprocessor, or multicore), parallelism is achieved under the hood through multithreading. The concept of threading is abstracted from the developer, because LabVIEW handles thread creation and synchronization automatically.

Developers using a language such as C must use explicit threading to implement parallelism. Creating threads isn’t difficult, but managing these threads and optimizing for performance can be a challenge. In C, you must manage synchronization through locks, mutexes, atomic actions, and other advanced programming techniques.
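For comparison, the following is a minimal sketch of what explicit threading looks like in C with POSIX threads. The worker routine, thread count, and shared total are hypothetical, chosen only to show the thread-creation and mutex boilerplate that LabVIEW generates and manages automatically.

/* Hedged sketch of explicit threading in C with POSIX threads.
 * The work done by each thread is hypothetical; the point is the
 * manual thread creation, joining, and mutex-protected update that
 * LabVIEW would otherwise handle for the developer.
 * Build (on a typical Linux system): gcc workers.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static long total = 0;                                /* shared state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg;
    long partial = 0;

    for (long i = 0; i < 1000000; i++)                /* independent work */
        partial += id + 1;

    pthread_mutex_lock(&lock);                        /* explicit synchronization */
    total += partial;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (long t = 0; t < NUM_THREADS; t++)            /* explicit thread creation */
        pthread_create(&threads[t], NULL, worker, (void *)t);

    for (int t = 0; t < NUM_THREADS; t++)             /* explicit join */
        pthread_join(threads[t], NULL);

    printf("total = %ld\n", total);
    return 0;
}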

When multiple threads become hard to follow, common programming pitfalls arise. These include inefficiency from spawning too many threads; deadlock, where threads become stuck waiting on one another and can’t proceed; race conditions, where the timing of code execution isn’t correctly managed, so data is either unavailable when it’s needed or the correct data has been overwritten; and memory contention, where multiple threads try to access the same memory at the same time.
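The race-condition pitfall in particular is easy to reproduce in a text-based language. In this hedged C sketch (the counter and iteration count are arbitrary), two threads update a shared variable without any synchronization, so increments are routinely lost:

/* Illustration of a race condition: counter++ is a read-modify-write
 * sequence, so concurrent updates from two threads can overwrite
 * each other. Expected result is 2000000; the actual printed value
 * is usually smaller and varies from run to run. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared and unprotected */

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("counter = %ld\n", counter);  /* typically less than 2000000 */
    return 0;
}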

Due to these issues, developers must emphasize debugging best practices to ensure code correctness on multicore hardware. LabVIEW prevents many common multicore programming errors by enforcing the principles of dataflow: a node executes only when all of its inputs have arrived, so simply wiring blocks together imposes the necessary ordering and avoids most race conditions. There are exceptions, however, and if the rules of dataflow are broken, race conditions can still occur.
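As a rough text-language analogy to LabVIEW’s dataflow wires (again using hypothetical counting work), the same computation can be restructured so that each worker delivers its result on its own “wire” and the downstream code runs only after both inputs have arrived. Because no variable is shared while the threads run, the race above disappears by construction:

/* Dataflow-style analogy: each thread writes only its own output
 * slot (its "wire"), and the final sum is computed only after both
 * results are available, mirroring a node that fires when all of
 * its inputs arrive. */
#include <pthread.h>
#include <stdio.h>

static void *count_to_million(void *out)
{
    long local = 0;
    for (int i = 0; i < 1000000; i++)
        local++;
    *(long *)out = local;                /* result leaves on its own "wire" */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    long result_a = 0, result_b = 0;

    pthread_create(&a, NULL, count_to_million, &result_a);
    pthread_create(&b, NULL, count_to_million, &result_b);

    pthread_join(a, NULL);               /* wait until both "inputs" have arrived */
    pthread_join(b, NULL);

    printf("counter = %ld\n", result_a + result_b);   /* always 2000000 */
    return 0;
}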

LabVIEW provides functional debugging and trace-level debugging. Functional debugging can be applied by visually inspecting the code as it executes in parallel (called highlight execution) or by applying probes that show data values at specific points in time. The ability to “see” parallel code is a key advantage that parallel languages have over their sequential counterparts, as the parallelism is self-documenting (Fig. 2).

When developing code for real-time symmetric multiprocessing (SMP) systems with LabVIEW, trace-level debugging is available through the Real-Time Execution Trace Toolkit. Thread activity on different CPUs can be viewed simultaneously to detect issues such as memory allocation, priority inheritance, or thread swapping (Fig. 3).

Application Development For FPGAs

When embedded designers use LabVIEW to target FPGAs, the language automatically generates an intermediate representation of the block diagram, which is then compiled to a bitfile using industry-standard tools for synthesis, optimization, and place and route. The parallel LabVIEW code from the block diagram is mirrored in the FPGA logic and runs on dedicated silicon. In this scenario, just as when programming multicore processors with multithreading, the implementation details are abstracted from the developer, and LabVIEW handles them at compile time.

When targeting an FPGA with LabVIEW, each application process may be implemented within a loop structure. Consider Figure 4, which partitions an application into three tasks: data acquisition, processing, and communication for data transfer to a host application. These tasks could be implemented as a sequence in a single loop, but they could also be coded as three separate loops.

One loop handles the data acquisition and its timing, then passes the data to the processing loop. The second loop receives data from the first, processes it, and passes the result to the third loop, which transfers the processed data to the host application.

The LabVIEW diagram is mapped to the FPGA gates and slices, so parallel loops in the block diagram are implemented on different sections of the FPGA fabric. This allows all processes to run simultaneously (in parallel). The timing of each process is independent of the rest of the diagram, which eliminates jitter. You can also add more loops without affecting the performance of previously implemented processes, as well as operations that let loops synchronize or exchange data.
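The following C sketch is a software analogy of the three-loop pipeline, not FPGA code; the acquire, process, and send stand-ins and the one-element mailbox are assumptions made purely for illustration. Each stage runs in its own loop and hands data to the next stage through a mailbox, much as LabVIEW FIFOs connect parallel loops on the FPGA fabric:

/* Software analogy (not FPGA code) of three pipelined loops:
 * acquisition -> processing -> communication. A one-element mailbox,
 * protected by a mutex and condition variable, stands in for the
 * FIFO that would connect the loops on the FPGA. */
#include <pthread.h>
#include <stdio.h>

#define SAMPLES 10

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    int             value;
    int             full;
} mailbox_t;

static mailbox_t acq_to_proc  = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };
static mailbox_t proc_to_comm = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };

static void put(mailbox_t *m, int v)
{
    pthread_mutex_lock(&m->lock);
    while (m->full)                       /* wait until the slot is free */
        pthread_cond_wait(&m->ready, &m->lock);
    m->value = v;
    m->full  = 1;
    pthread_cond_signal(&m->ready);
    pthread_mutex_unlock(&m->lock);
}

static int get(mailbox_t *m)
{
    pthread_mutex_lock(&m->lock);
    while (!m->full)                      /* wait until data has arrived */
        pthread_cond_wait(&m->ready, &m->lock);
    int v   = m->value;
    m->full = 0;
    pthread_cond_signal(&m->ready);
    pthread_mutex_unlock(&m->lock);
    return v;
}

static void *acquire_loop(void *arg)      /* stand-in for data acquisition */
{
    (void)arg;
    for (int i = 0; i < SAMPLES; i++)
        put(&acq_to_proc, i);             /* "acquired" sample */
    return NULL;
}

static void *process_loop(void *arg)      /* stand-in for processing */
{
    (void)arg;
    for (int i = 0; i < SAMPLES; i++)
        put(&proc_to_comm, get(&acq_to_proc) * 2);
    return NULL;
}

static void *comm_loop(void *arg)         /* stand-in for transfer to the host */
{
    (void)arg;
    for (int i = 0; i < SAMPLES; i++)
        printf("sent %d\n", get(&proc_to_comm));
    return NULL;
}

int main(void)
{
    pthread_t acq, proc, comm;

    pthread_create(&acq,  NULL, acquire_loop, NULL);
    pthread_create(&proc, NULL, process_loop, NULL);
    pthread_create(&comm, NULL, comm_loop,    NULL);

    pthread_join(acq,  NULL);
    pthread_join(proc, NULL);
    pthread_join(comm, NULL);
    return 0;
}

On the FPGA, of course, the three loops run on separate sections of the fabric rather than as operating-system threads, and the FIFOs connecting them are implemented in hardware.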

Conclusion

Hardware architectures continue to become more parallel, and this shift is affecting the software design approach. Traditionally, sequential languages have been the norm, and different languages with different programming constructs (C versus VHDL, for example) have been required to take advantage of multicore processors and FPGAs, respectively.

In contrast, parallel programming with LabVIEW provides a unified graphical system design approach for programming both multicore processors and FPGAs. As a parallel programming language and development tool, LabVIEW addresses the key needs of embedded developers working with parallel hardware: intuitive representation of parallel code through graphical programming, built-in synchronization and communication mechanisms, and integrated debugging capabilities.

While there is no silver bullet for programming complex embedded systems that rely on parallel silicon, developers can now look to programming languages and tools that help bring software closer to the hardware. For more on multicore programming with LabVIEW, check out this webcast at http://zone.ni.com/wv/app/doc/p/id/wv-359?metc=mtkwks.

References

1. K. Asanovic et al., “The Landscape of Parallel Computing Research: A View from Berkeley,” Technical Report UCB/EECS-2006-183, EECS Department, University of California, Berkeley, 2006.
