The virtualization of computing environments is not a new concept. Back in the 1960s, IBM experimented with virtualization techniques, including what would later be called paravirtualization, and by the 1970s the company had working implementations on its mainframes. By definition (borrowed from Wikipedia), paravirtualization is a virtualization technique that presents a software interface to a virtual machine that is similar, but not identical, to that of the underlying hardware.
Recently, Intel released processors with vPro technology incorporating the VT-x and VT-d extensions. These extensions replace some paravirtualization techniques with hardware, making it easier to implement processor-architecture virtualization. As these techniques have been adopted, different ways of implementing virtualization have emerged to address specific application needs. Today these can be broken into two classes: “server virtualization” and “embedded virtualization.”
Server and Embedded Virtualization
Server virtualization has been driven by the need to reduce the explosion of servers required to process the ever-increasing amount of data and applications. Usually, it means leveraging the additional processing power that can be stuffed into a 1U or 2U server module.
The ultimate goal is to consolidate applications onto one server. Traditionally, this requires porting more applications onto a server, which isn’t easy because applications generally run under different environments—e.g., different versions of operating systems (OSs), middleware, etc.
Porting them onto one OS means they must be modified and then revalidated and/or certified before deployment. With server virtualization, multiple environments like OSs, middleware, and application packages can be loaded as-is on the same server.
In contrast, embedded virtualization is driven by the need to combine a real-time operating system (RTOS) running a control application alongside a general-purpose operating system (GPOS) running an advanced human-machine interface (HMI) and/or data processing software. Embedded virtualization, like server virtualization, reduces costs. However, rather than eliminate the need for extra server computers, embedded virtualization can eradicate the cost of separate real-time computer modules.
Embedded virtualization enables the two OSs to run on the same platform without compromising the real-time deterministic requirements of the control application. In addition to its ability to consolidate two-box systems into one, it opens the door to the improvement of control applications: An advanced HMI can be added without requiring the OEM to use an expensive two-box solution.
The Need For Different VMMs
On the surface, server- and embedded-virtualization implementations are essentially the same (Figures 1 and 2). Both need some form of virtual-machine manager (VMM), also called a hypervisor in certain circles (e.g., IBM), to load the individual OSs/applications and manage the memory, interrupts, I/Os, and other system elements.
Let’s refer to a general-purpose server-application VMM as “VMM(G)” and one for a real-time embedded system as “VMM(R).” A VMM(G) supporting virtualization for server applications is typically designed as an application running on a GPOS, such as Windows or Linux. Conversely, a VMM(R) supporting real-time control applications needs to be built on an RTOS.
The difference between the two VMM approaches lies in their scheduler architecture and, hence, the prioritization of tasks and events. GPOS schedulers tend to prioritize tasks on a first-come, first-served basis, while RTOS schedulers are event-prioritized.
RTOS schedulers handle new interrupts immediately (unless they are masked), regardless of what other tasks are running. Therefore, when a sensor or another data-capturing device generates an interrupt, it’s always serviced at the speed that the CPU can service the interrupt, resulting in predictable and deterministic response time.
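To make the contrast concrete, here is a minimal C sketch of the two scheduling policies. The task structure and the pick-next functions are purely illustrative and don’t correspond to any particular GPOS or RTOS.

```c
/* Minimal sketch contrasting the two scheduling policies described above.
 * Names (task_t, pick_next_gpos, pick_next_rtos) are illustrative only. */
#include <stddef.h>

typedef struct task {
    int          priority;   /* higher value = more urgent (RTOS view) */
    struct task *next;       /* singly linked ready queue              */
} task_t;

/* GPOS-style: first-come, first-served - take whatever arrived earliest. */
static task_t *pick_next_gpos(task_t *ready_queue)
{
    return ready_queue;                  /* head of the FIFO */
}

/* RTOS-style: event/priority driven - always take the most urgent task,
 * even if it arrived last (e.g., a task released by a sensor interrupt). */
static task_t *pick_next_rtos(task_t *ready_queue)
{
    task_t *best = ready_queue;
    for (task_t *t = ready_queue; t != NULL; t = t->next)
        if (t->priority > best->priority)
            best = t;
    return best;
}
```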
I/Os And Traffic
Ethernet carries most of the I/O traffic coming in and out of a server. This connection to the outside world is very flexible. It can be rerouted easily, but most importantly, packets of data coming from different inputs needn’t be read at a specific time. The application generally sorts out the incoming data and starts processing the information only when all of the data from a specific source is present.
A packet of particular information doesn’t have to arrive at a critical moment. If a data element isn’t there when required by the application, the application will merely wait for it before completing the transaction.
Due to the nature of server traffic and the fact that it typically interacts with the Internet cloud, it doesn’t matter if data is delayed going in or out of each virtualized server (Fig. 3). Note: Protocols such as Profinet and EtherCAT, as well as control buses, allow data to be transferred deterministically across an Ethernet network. They are, however, not compatible with the TCP/IP traffic used in server applications.
In control applications, time-correlated data is essential to making decisions. Robots and machines that are monitored by several inputs, whether encoders, visual data, or something else, need valid control information to arrive at the control system at a particular time. Reading sensors at random times will often produce unpredictable behavior, if not catastrophic results.
Part of that problem is resolved by making sure the VMM(R) reads the I/Os in real-time, as discussed earlier. However, control-system I/O devices, such as frame grabbers, are often of the non-commercial variety, and only RTOS drivers are available. In this case, the RTOS should handle the I/O directly, bypassing the VMM(R). In other words, the RTOS is allowed to “punch through” the VMM(R) (Fig. 4).
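A rough sketch of the “punch through” idea in C: the RTOS driver touches the device’s registers directly rather than trapping into the VMM(R). The register map (base address, status and data offsets) is hypothetical and stands in for a real frame grabber.

```c
/* Sketch of direct device access from the RTOS, bypassing the VMM(R).
 * The register layout below is assumed, not taken from any real device. */
#include <stdint.h>

#define FRAME_GRABBER_BASE  0xFED00000u   /* assumed MMIO base, mapped 1:1 */
#define REG_STATUS          0x00u
#define REG_DATA            0x04u
#define STATUS_FRAME_READY  (1u << 0)

static inline uint32_t mmio_read32(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;    /* no VMM exit: plain memory access */
}

/* Poll the device and fetch one word of frame data when it is available. */
uint32_t read_frame_word(void)
{
    while (!(mmio_read32(FRAME_GRABBER_BASE + REG_STATUS) & STATUS_FRAME_READY))
        ;                                 /* deterministic busy-wait in the RTOS */
    return mmio_read32(FRAME_GRABBER_BASE + REG_DATA);
}
```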
On another front, multicore processors have opened the possibility of allocating a core to a specific guest OS. This is a very powerful feature because it essentially segregates the operation of each guest OS, ensuring the integrity of each environment and guaranteeing compute capacity for each OS. This capability needs to be managed by the VMM. Otherwise, guest OSs will typically take whatever resources they can.
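As an illustration of per-core allocation, the sketch below pins one worker thread per guest to a dedicated core using the GNU/Linux pthread affinity API. An actual VMM would expose this through its own configuration, so treat the guest names and core assignments as assumptions.

```c
/* Sketch of dedicating one host core to each guest. The "guest" here is a
 * worker thread standing in for a vCPU; core numbers and labels are assumed. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *run_guest(void *arg)
{
    const char *name = arg;
    printf("%s running on CPU %d\n", name, sched_getcpu());
    /* ... guest execution loop would go here ... */
    return NULL;
}

static void start_pinned_guest(const char *name, int core)
{
    pthread_t      tid;
    pthread_attr_t attr;
    cpu_set_t      set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);                        /* reserve exactly one core */

    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof set, &set);

    pthread_create(&tid, &attr, run_guest, (void *)name);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
}

int main(void)
{
    start_pinned_guest("GPOS (HMI)",     0);    /* Windows-class guest   */
    start_pinned_guest("RTOS (control)", 1);    /* deterministic guest   */
    return 0;
}
```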
Paravirtualization And HW-Assisted Virtualization
Paravirtualization techniques can be used in several ways to achieve both server and embedded virtualization by mapping memory addresses, I/O addresses, and interrupt vectors. Occasionally, they’re also instituted to achieve better performance. In all cases, paravirtualization dictates some code modification, and its implementation is usually specific to the environment (OS, platform, and application).
To run two OSs on the same platform, one or both of the OSs and their associated application(s) must be relocated in memory. Take, for example, a platform that combines the HMI (GPOS) and the control system (RTOS/guest OS) (Fig. 5). The VMM(R) loads the guest OS and associated application(s) at system startup. From that point forward, any memory address generated by the guest OS (RTOS) and the applications running on it must be adjusted by the memory load offset.
These tasks are accomplished via paravirtualization techniques, which edit memory tables and other memory vectors within the RTOS. Yet this process can be difficult because it affects many parts of the OS. It also requires a substantial amount of verification effort to ensure that it works properly. For this reason, the technique supports only a few OSs.
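A minimal sketch of that software fix-up, assuming the guest image is simply rebased by a fixed load offset: every guest-physical pointer in its memory tables and vectors must be patched before the guest runs. The offset value and table-walking code are illustrative only.

```c
/* Sketch of the paravirtualization fix-up: rebase every guest-physical
 * address by the load offset. GUEST_LOAD_OFFSET is an assumed value. */
#include <stdint.h>
#include <stddef.h>

#define GUEST_LOAD_OFFSET  0x20000000u   /* assumed: guest loaded 512 MB up */

/* Rebase one guest-physical address into a host-physical address. */
static inline uint64_t rebase(uint64_t guest_phys)
{
    return guest_phys + GUEST_LOAD_OFFSET;
}

/* Patch a table of pointers inside the guest image (e.g., page-table
 * entries or interrupt vectors) so they point at the relocated copy. */
void patch_guest_table(uint64_t *table, size_t entries)
{
    for (size_t i = 0; i < entries; i++)
        table[i] = rebase(table[i]);
}
```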
Hardware-assisted virtualization features, such as VT-x, alleviate that complexity by providing hardware address translators built into the chipset. These ensure that any memory address issued by the guest OS is automatically adjusted by the load-offset. Likewise, the VT-d feature automatically adjusts the memory accesses generated by I/O devices according to the load-offset of the guest. (Some processors support VT-x, but not VT-d. In those cases, paravirtualization techniques must still be used.)
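Conceptually, VT-x and VT-d replace that patching with a second-stage translation that the VMM programs once and the hardware applies on every access, whether it comes from the guest’s CPU (VT-x) or from a device’s DMA (VT-d). The flat array below is only a stand-in for the real multi-level EPT/IOMMU page tables.

```c
/* Conceptual model of hardware-assisted second-stage translation.
 * Sizes and the flat table are simplifications for illustration. */
#include <stdint.h>

#define PAGE_SHIFT  12
#define GUEST_PAGES 4096                      /* 16-MB guest, for illustration */

static uint64_t second_stage[GUEST_PAGES];    /* guest page -> host page */

/* VMM side: fill the table once at guest load time. */
void vmm_map_guest(uint64_t host_base)
{
    for (uint64_t gpage = 0; gpage < GUEST_PAGES; gpage++)
        second_stage[gpage] = (host_base >> PAGE_SHIFT) + gpage;
}

/* Hardware side (modeled in C): applied automatically on every access,
 * so neither the guest OS nor its drivers need to be modified. */
uint64_t translate(uint64_t guest_phys)
{
    uint64_t gpage  = guest_phys >> PAGE_SHIFT;        /* assumes gpage < GUEST_PAGES */
    uint64_t offset = guest_phys & ((1u << PAGE_SHIFT) - 1);
    return (second_stage[gpage] << PAGE_SHIFT) | offset;
}
```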
Paravirtualization is also essential in the rerouting of interrupts serviced by the guest OS. These need to be altered by the VMM at load time, according to rules set up by the user beforehand.
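One way to picture those rerouting rules is as a small table the VMM consults when a hardware vector fires. The table layout and vector numbers below are hypothetical.

```c
/* Sketch of load-time interrupt-routing rules held by the VMM.
 * Vector numbers and device names are assumed for illustration. */
#include <stdint.h>
#include <stddef.h>

enum owner { OWNER_GPOS, OWNER_RTOS };

struct irq_route {
    uint8_t    host_vector;   /* vector as seen by the hardware         */
    enum owner owner;         /* guest that services it                 */
    uint8_t    guest_vector;  /* vector number injected into that guest */
};

/* Rules set up by the user before the guests are loaded. */
static const struct irq_route routes[] = {
    { 0x21, OWNER_GPOS, 0x21 },   /* keyboard -> GPOS, unchanged          */
    { 0x45, OWNER_RTOS, 0x30 },   /* encoder IRQ -> RTOS, remapped vector */
    { 0x46, OWNER_RTOS, 0x31 },   /* frame grabber -> RTOS, remapped      */
};

/* Called by the VMM when a hardware interrupt arrives. */
const struct irq_route *route_interrupt(uint8_t host_vector)
{
    for (size_t i = 0; i < sizeof routes / sizeof routes[0]; i++)
        if (routes[i].host_vector == host_vector)
            return &routes[i];
    return NULL;                  /* unrouted: handled by the VMM's default owner */
}
```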
Also, paravirtualization techniques can be used to optimize performance at the expense of making interfaces application-specific. For instance, to improve performance, application code can bypass an I/O driver in the guest OS along with the VMM and write directly to the real I/O.
Such changes customize the guest OS and VMM and prevent them from being able to run in other system configurations without additional modifications. One way to avoid the expense and risk of maintaining customized OS and VMM software environments is to employ virtualization software that runs standard OS and application software without modification.
Product Possibilities
Several products are available to help in the virtualization process. Developed by TenAsys, the INtime for Windows RTOS employs a number of paravirtualization techniques to optimize its interface with Windows to maximize performance. All of the paravirtualization is done on the INtime RTOS kernel side, so the customer can run an off-the-shelf copy of Windows, with all of the latest devices and associated drivers supported by Windows. The RTOS can also run multiple instances of itself alongside one instance of Windows on a multicore processor.
Another virtualization-enhanced software product from TenAsys is eVM for Windows. Designed to run on an Intel multithread/multicore processor with the VT-x feature, eVM enables PC-based RTOSs to run alongside Windows without requiring any paravirtualization of Windows or the RTOS.
A thread/core is dedicated to each OS, and the processor’s VT-x feature maps the memory. Without this feature, the VMM would be required to dynamically paravirtualize any guest RTOS being loaded. Supporting them all would be a nearly impossible task, considering the many available versions of PC-based RTOSs.
In addition, eVM takes advantage of the processor’s VT-d extensions when the guest RTOS needs to interface directly to I/O devices. It also works with an off-the-shelf copy of Windows that runs directly on the platform (bare-metal) so all of the latest devices and drivers are supported.
Conclusion
Paravirtualization techniques can improve performance and consolidate platforms by combining operating environments. However, embedded-system designers need to understand the possible pitfalls of paravirtualized systems (Fig. 6). If their multi-OS solution employs non-standard modifications or tweaks to the OSs or the VMM, they may be in for support headaches when an OS or device driver needs to change to accommodate a bug fix or feature upgrade.
If your application involves real-time computing, an embedded-virtualization solution that’s simpler and less costly to develop and maintain is the obvious answer, especially one that can run standard and legacy OS software without modification.