What’s The Difference Between Software Development Platforms?

March 14, 2012

Software development engines come in various forms, spanning virtual prototypes, register-transfer level (RTL) simulation, acceleration/emulation, and FPGA-based prototyping. This article outlines when each is appropriate during the design cycle and explains their differing use cases.

Table of Contents

  1. Background
  2. The Hardware/Software Stack
  3. Operating Systems
  4. The Project Flow
  5. The Project Timeline
  6. The Continuum Of Engines
  7. Designer Requirements
  8. When To Use Which Engine?

Background
Software development is all the rage these days. Do you have a great idea for the next must-have app? Millions of downloads of apps for phones and tablets on various operating systems (OSs) make software a hot market. As a result, tools for application and embedded software development are in high demand, and industries such as electronic design automation (EDA) are watching the attention software is getting with keen interest. In addition, EDA’s traditional core customers, the semiconductor vendors, are trying to adapt to new challenges posed by their own customers, the system houses, which ultimately serve you and me, the consumers.

Over the last decade, a set of platforms that enable software development has emerged at the interface between EDA and embedded software development. Today, a design team has various options for how to develop software for the next cool gadget. But before we jump into the differences, let’s understand a bit more about what types of software can be found in today’s devices and what a typical development flow looks like.

The Hardware/Software Stack
Let’s look at a hardware/software stack for a consumer device like an Android phone (Fig. 1). At the lowest level of hardware, we find design and verification intellectual property (IP) assembled into sub-systems. In a phone, you will find various processors including application processors and video and audio engines, as well as baseband processors dealing with the wireless network.

1. Shown is an example of a hardware stack (at left) and software stack (at right) for a consumer device.

We must cover both control and signal processing, so designers typically turn to control processors such as ARM cores and digital signal processing (DSP) engines, which all must be programmed with software. On the hardware side, we integrate the subsystems into systems-on-a-chip (SoCs), which are in turn integrated into the actual printed-circuit board (PCB) and then into the phone.

The software stack looks just as complex as the hardware stack. At the highest level, end users are exposed to the applications that make our life easier, from Facebook to Netflix to productivity applications. These apps run on OSs, which may or may not be real-time-capable, and are enabled by complex middleware.

For example, the app that guided me to work this morning to avoid traffic did not have to implement location services by itself, but instead used pre-existing services for GPS. The same is true for most aspects of consumer products. In a phone, the middleware would include services like location, notification, activity, resource, and content management, all of which can be used by the apps we users see.

Operating Systems
Underneath the middleware we find the actual OS (iOS, Android, or others), including its services for networking, Web, audio, and video media. Alongside the OS, drivers (such as USB, keyboard, Wi-Fi, audio, and video drivers) connect the actual hardware to the rest of the software stack. The difference between the various layers lies in how and by whom they are provided.

In today’s world, many drivers are expected to come from the IP providers together with the hardware IP they license. If the hardware IP provider does not make them available, or if the hardware is developed in house rather than licensed, then a software team working closely with the hardware team will develop the drivers. The apps are developed by the system houses and by independent developers. My iPhone, for example, has base applications developed by Apple plus applications developed independently.
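
To make the driver layer concrete, here is a minimal sketch of what the lowest level of such a driver might look like in C++, assuming a hypothetical memory-mapped UART. The base address, register offsets, bit layout, and the uart_putc name are invented for illustration; they do not come from any particular hardware IP, and the code only makes sense when compiled for the target hardware:

    #include <cstdint>

    // Hypothetical memory-mapped UART registers. The base address, offsets,
    // and bit layout are invented for illustration; real values come from
    // the hardware IP's register map.
    constexpr std::uintptr_t UART_BASE   = 0x40000000u;
    constexpr std::uintptr_t UART_DATA   = UART_BASE + 0x00;  // transmit data register
    constexpr std::uintptr_t UART_STATUS = UART_BASE + 0x04;  // status register
    constexpr std::uint32_t  TX_READY    = 1u << 0;           // "ready for next byte" flag

    static volatile std::uint32_t* reg(std::uintptr_t addr) {
        return reinterpret_cast<volatile std::uint32_t*>(addr);
    }

    // Lowest driver layer: busy-wait until the transmitter is ready,
    // then write one character into the data register.
    void uart_putc(char c) {
        while ((*reg(UART_STATUS) & TX_READY) == 0) {
            // spin until the hardware can accept another byte
        }
        *reg(UART_DATA) = static_cast<std::uint32_t>(c);
    }

Everything above this layer, from OS services through middleware to the apps, calls down through progressively higher-level interfaces, which is why the driver layer stays so closely tied to the hardware team.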

OSs and middleware are typically licensed distributions. Not too long ago the commercial market for mobile OSs was quite large (remember Symbian and Microsoft’s Windows CE?). Embedded Linux and Android have since changed the landscape. OSs are still licensed, but with the exception of Windows Phone 7, the purely commercial mobile OSs are largely extinct, replaced by OSs maintained in house by the system houses, such as iOS, Blackberry OS, or various distributions of Android. The system houses then customize these OSs further, as HTC does with the Sense technology on its Android phones.

The Project Flow
Now that we’ve looked at the different types of software to be developed, let’s analyze a project flow representative of consumer products. Figure 2 compares the various hardware/software development efforts during a project with the points at which the different software development engines become available.

2. Shown is a comparison of the various hardware/software development efforts during a project and the availability of different software development engines. (courtesy of IBS and Cadence)

The vertical axis shows the relative effort in human weeks, normalized to the average total of the projects analyzed. The horizontal axis shows the elapsed time per task and is normalized to the average time it takes to get from RTL to tapeout, which was 62 weeks across the projects analyzed. (For more information on the data, see “Improving Software Development and Verification Productivity Using Intellectual Property (IP) Based System Prototyping.”)

For instance, the total software development effort is about 45% of the project effort (measured in human weeks), as shown on the vertical axis. Verifying the RTL accounts for 20% of the overall hardware/software project effort and about 55% of the elapsed time from RTL to tapeout, as indicated on the horizontal axis.
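
To make the normalized figures concrete, the short sketch below works them out in absolute terms. It assumes only the averages quoted above (62 weeks from RTL to tapeout, 45% software effort, 20% RTL verification effort, 55% of elapsed time), not data from any specific project:

    #include <cstdio>

    int main() {
        // Averages cited above: elapsed time from RTL to tapeout and the
        // shares attributed to software development and RTL verification.
        const double rtl_to_tapeout_weeks    = 62.0;
        const double sw_share_of_effort      = 0.45;  // share of total human-weeks
        const double rtl_verif_share_effort  = 0.20;  // share of total human-weeks
        const double rtl_verif_share_elapsed = 0.55;  // share of RTL-to-tapeout time

        // 0.55 * 62 weeks is roughly 34 weeks of elapsed RTL verification time.
        std::printf("RTL verification elapsed time: ~%.0f weeks\n",
                    rtl_verif_share_elapsed * rtl_to_tapeout_weeks);
        std::printf("Per 100 human-weeks of total effort: ~%.0f for software, "
                    "~%.0f for RTL verification\n",
                    100 * sw_share_of_effort, 100 * rtl_verif_share_effort);
        return 0;
    }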

We may also examine the points at which the different software development engines become available (Fig. 2, at bottom). These engines are defined as follows:

  • Virtual prototypes are transaction-level representations of the hardware, able to execute the same code that will be loaded on the actual hardware, and often executing at well above 20 MIPS on x86-based hosts running Windows or Linux. To the software developer, virtual prototypes look just like the hardware because the registers are represented correctly, while functionality is accurate but abstracted. (For example, processor pipelines and bus arbitrations are not represented with full accuracy; see the sketch after this list.)
  • RTL simulation executes the same hardware representation that is later fed into logic synthesis and implementation. It is the main vehicle for hardware verification and executes only in the hertz range, but it is fully accurate, since the RTL is the golden model for implementation, and it allows detailed debug.
  • Acceleration/emulation executes parts of the design using specialized hardware—verification computing platforms—into which the RTL is mapped automatically and for which the hardware debug is as capable as in RTL simulation. Interfaces to the outside world (Ethernet, USB, and so on) can be made using rate adapters. In acceleration, part of the design (often the testbench) resides in the host while the verification computing platform executes the design under development. As indicated by the name, the primary use case is acceleration of simulation. In-circuit emulation takes the full design including testbenches and maps it into the verification computing platform, allowing much higher speed and enabling hardware/software co-development, ranging all the way up into the megahertz range.
  • FPGA-based prototyping uses an array of FPGAs into which the design is mapped directly. Due to the need to partition the design, re-map it to a different implementation technology, and re-verify that the result is still exactly what the incoming RTL represented, the bring-up of an FPGA-based prototype can be cumbersome and take months (as opposed to hours or minutes in emulation) and debug is mostly an offline process. In exchange, speeds will go into the tens of megahertz range, making software development a realistic use case.
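
To make the “accurate registers, abstracted timing” idea behind virtual prototypes more concrete, here is a minimal sketch of a transaction-level register model in plain C++. The timer block, its register layout, and the TimerTLM name are invented for illustration; production virtual prototypes typically build such models on standards like SystemC TLM-2.0 rather than raw C++:

    #include <cstdint>
    #include <cstdio>

    // Minimal transaction-level model of a hypothetical timer block.
    // Software sees correctly laid-out registers, but there is no clock,
    // no bus protocol, and no pipeline model: a register access is just
    // a function call, which is what makes the model fast.
    class TimerTLM {
    public:
        static constexpr std::uint32_t CTRL  = 0x00; // control register
        static constexpr std::uint32_t LOAD  = 0x04; // reload value
        static constexpr std::uint32_t COUNT = 0x08; // current count (read-only)

        std::uint32_t read(std::uint32_t offset) const {
            switch (offset) {
                case CTRL:  return ctrl_;
                case LOAD:  return load_;
                case COUNT: return count_;
                default:    return 0;
            }
        }

        void write(std::uint32_t offset, std::uint32_t value) {
            switch (offset) {
                case CTRL: ctrl_ = value; if (ctrl_ & 1u) count_ = load_; break;
                case LOAD: load_ = value; break;
                default:   break; // writes to COUNT are ignored, as in the real block
            }
        }

    private:
        std::uint32_t ctrl_ = 0, load_ = 0, count_ = 0;
    };

    int main() {
        TimerTLM timer;
        timer.write(TimerTLM::LOAD, 1000);  // driver code would do the same
        timer.write(TimerTLM::CTRL, 1);     // enable: count reloads from LOAD
        std::printf("COUNT = %u\n", timer.read(TimerTLM::COUNT)); // prints 1000
        return 0;
    }

Because a register access is just a function call, with no clock, bus protocol, or pipeline model, such a model can execute quickly while driver code still sees the registers it expects.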

Virtual prototypes that use transaction-level models (TLMs) can be available earliest in the design flow, especially for new designs (Fig. 2, again). Compared to RTL development and verification, development of blocks at the TLM level takes less effort because less detail is represented, resulting in less accurate models. Designs at RTL become executable in simulation first. Once RTL is reasonably stable, subsystems or full SoCs can be brought up in emulation or acceleration engines. Very stable RTL can be brought up in FPGA-based prototyping as well. Finally, once the first silicon samples are available, prototype boards may become available for software development.

The Project Timeline
Now we will assume a complex project spanning 15 months from RTL to tapeout and requiring another three months for silicon availability. Virtual prototypes can be available from 12 to 15 months before there is silicon. The elapsed time for development and verification of RTL would be about 10 months, so emulation and acceleration can become available from six to nine months before silicon, and FPGA prototypes can become available from three to six months before silicon. This clearly illustrates that the time of availability of the various software development engines is very different over the course of the design cycle.
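
The sketch below simply restates that timeline in code, as a worked example using the numbers assumed above (15 months from RTL to tapeout, three more months to silicon, and the quoted availability windows); the figures are illustrative ranges, not measurements:

    #include <cstdio>

    int main() {
        // Assumed schedule from the text: 15 months from RTL to tapeout,
        // plus another 3 months until silicon samples are available.
        const int rtl_to_tapeout_months = 15;
        const int tapeout_to_silicon    = 3;
        std::printf("RTL start to silicon: %d months\n",
                    rtl_to_tapeout_months + tapeout_to_silicon);  // 18 months

        // Availability windows quoted above, in months before silicon.
        struct Window { const char* engine; int from; int to; };
        const Window windows[] = {
            {"Virtual prototype",      12, 15},
            {"Emulation/acceleration",  6,  9},
            {"FPGA-based prototype",    3,  6},
        };
        for (const Window& w : windows) {
            std::printf("%-24s usable ~%d to %d months before silicon\n",
                        w.engine, w.from, w.to);
        }
        return 0;
    }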

The Continuum Of Engines
However, time of availability is not the only issue. As we dig deeper into the design team’s requirements, it becomes clear that there is no “one-size-fits-all” solution. A continuum of engines is needed. Let’s look at the requirements in more detail.

For application software development, designers care most about speed, debug, and early access. When development must become more hardware-aware, hardware accuracy becomes more important. For new blocks and IP validation, hardware accuracy, hardware debug, and turnaround time are crucial. Later, during hardware development, subsystems typically contain software and therefore need software debug capabilities. Finally, hardware/software validation requires enough speed to run a sufficient number of tests to verify functionality.

The bottom line: No engine covers all of these requirements and addresses all the care-abouts. Hence, software development platforms must operate in a connected flow to provide a smooth path from concept to product.

Designer Requirements
Let’s look at a graphical summary of the different user requirements in a flow from concept to product (Fig. 3). Today, we still develop most software on actual hardware once it is available. Real hardware executes at the intended speed, which is great, but debug visibility is limited because it is hard to look inside the hardware. Most importantly, this type of software development becomes available at the latest possible point in time, when any change to the hardware requires a redesign.

3. This diagram summarizes the different user requirements in a flow from concept to product.

In contrast, at the early end of the design spectrum, virtual prototyping allows software development at the earliest point. Software debugging is easier and faster, but hardware accuracy is limited and detailed hardware debugging is impractical, so virtual prototypes cannot be used for hardware verification.

Once RTL is available, RTL simulation offers quick turnaround and accurate hardware debugging, given that RTL is today’s golden reference for functionality. But its lack of speed makes RTL simulation impractical for developing most types of software.

As a result, for hardware/software verification, engineers want to use emulation and acceleration platforms. These tools provide more speed, definitely enough for subsystems. In addition, software debug now becomes practical, even though these engines become available later in the flow than RTL simulation. The speed combines with high hardware accuracy and good turnaround times, meaning it takes relatively little time to bring a new design up in emulation and acceleration engines.

In addressing the need for hardware/software validation, even more speed is required to run more software for a longer time. This is where FPGA-based prototyping platforms prove useful, enabling even more speed and making software development and validation even more productive. However, debug visibility and turnaround time do not match what emulators offer.

When To Use Which Engine?
We have only scratched the surface of the differences between software development platforms and have assumed the user is involved in software development. With perhaps the exception of virtual prototypes, which are mostly focused on software developers, none of the engines mentioned here is exclusive to software development.

RTL simulation’s primary intent is hardware verification, and extending its use to hardware/software co-verification is a more limited use model due to the inherent speed limitations. Emulation is used for in-circuit execution, power analysis, hardware/software verification, and connections to the real world. Finally, FPGA-based prototyping serves for software development, regression verification, and in-system validation.

There are fundamental differences between the different execution vehicles with respect to time of availability during a project, software debug, execution speed, hardware accuracy, and debug and turnaround time to bring up a design. In an ideal world, design teams would like to see a continuum of connected engines, allowing them to exploit the given advantages of each engine.

Going forward, hybrid use models will become more and more important. For example, when part of the design is stable (as in cases of IP reuse), a hybrid of emulation and FPGA prototyping has the advantage of keeping the still-changing parts in the emulator while the fixed pieces of the design run in the FPGA prototype at higher speed.
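
As a purely conceptual sketch of that partitioning decision (the block names and the stability flag are invented, and real hybrid flows are configured through the vendor tools rather than code like this), the split might look as follows:

    #include <cstdio>
    #include <vector>

    // Conceptual illustration of the emulation/FPGA-prototyping hybrid
    // described above: blocks whose RTL is still changing stay in the
    // emulator for fast turnaround and debug, while stable, reused blocks
    // move to the FPGA prototype to run at higher speed.
    struct Block {
        const char* name;
        bool rtl_stable;   // e.g. reused IP with frozen RTL
    };

    int main() {
        const std::vector<Block> design = {
            {"CPU subsystem (reused IP)",  true},
            {"DDR controller (reused IP)", true},
            {"New video codec",            false},
            {"New modem datapath",         false},
        };

        for (const Block& b : design) {
            std::printf("%-28s -> %s\n", b.name,
                        b.rtl_stable ? "FPGA prototype (speed)"
                                     : "emulator (fast turnaround, debug)");
        }
        return 0;
    }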

Similarly, a combination of emulation and virtual prototyping allows the optimized execution of inherently parallel hardware like a graphics accelerator or a video engine in the emulator, while software executes on fast just-in-time compiled processor models on a host processor, running faster than would be possible in an emulator.

About the Author

Frank Schirrmeister

Frank Schirrmeister is Senior Director at Cadence Design Systems in San Jose, responsible for product management of the Cadence System Development Suite, accelerating system integration, validation, and bring-up with a set of four connected platforms for concurrent HW/SW design and verification.
