Platforms Strive For Virtual Security

Aug. 4, 2005

Running multiple operating systems on a single platform continues to be a mainstay in server environments. But now this ability is migrating to embedded environments for various reasons, ranging from legacy operating-system (OS) support to enhanced security. More powerful processing environments are even driving OS coexistence and virtualization into portable, handheld devices.

Most developers are familiar with the native single-image OS architecture (Fig. 1). It's used on desktop PCs and embedded devices alike. Add a memory-management unit (MMU), and the OS can isolate applications so one can't accidentally or maliciously stomp on another or on the OS itself.

The single-image OS serves most applications well. Yet many applications now demand more flexibility. This can be gained through a number of techniques, depending on application requirements and system support. For example, it's becoming more common to mix real-time applications with applications that don't require the same level of real-time support.

One approach delivers this mix in a single OS like FSMLabs' RTLinux. The OS is built on a real-time kernel, with Linux sitting on top of it. Real-time applications operate in one environment, and Linux applications operate in the other. The two environments can communicate, but applications can't migrate from one to the other without being rewritten.

The other method mixes OSs to provide a more flexible environment while retaining an existing OS to handle existing applications. There are basically three approaches. One applies to environments where all applications run on the same type of virtual machine under the same operating system (see "Software Virtual Machines," p. 46). The other two approaches, para-virtualization and full virtualization, virtualize the OS run-time environment to support one or more guest OSs.

Para-virtualization requires guest OSs specifically targeted for the host environment (Fig. 2). Advantages include low overhead for guest OSs, which in turn increases system performance and utilization. On the flip side, a guest OS must be modified. It's one reason why open-source OSs are often the first targets for para-virtualization systems.

Full virtualization requires hardware support in the host processor over and above MMU support (Fig. 3). This typically includes trapping privileged instructions so the underlying system can efficiently emulate them on behalf of the interrupted guest.

Virtualization offers a number of benefits. Of course there is the obvious benefit of running multiple OSs on a single platform. It also is possible to implement quality of service (QoS) at the OS level. The same is true for security. It's possible to share or simulate peripherals that the guest OSs have access to. Finally, a virtualized system is great for management and debugging since a virtual-machine image can be arbitrarily stopped, saved, and restarted. In fact, quick guest OS startup can be accomplished by loading an image that has already reached the desired starting point.

Virtual systems can run on top of a conventional OS. Typically, though, they're built on a more limited hypervisor, which is a virtual-machine "manager." A hypervisor provides many of the services a conventional virtual-memory OS would, such as coordinating memory and I/O protection.

However, a hypervisor wouldn't offer services like generalized messaging (e.g., BSD sockets and file management). Instead, the hypervisor controls and coordinates hardware services and protection mechanisms that are then managed by the guest OSs for their respective applications.

Hypervisors are designed to be very small, very fast, and very efficient. Xen, for example, is only about 50,000 lines of code. Much larger, and the overhead would make the virtualization process impractical. It's possible to incorporate the hypervisor in a host OS, but it tends to be more efficient to run the host OS as a guest OS.

So why consider all of this sophisticated design and support hardware? Surprisingly, the reasons for using virtualization on a server are often the same reasons why it winds up in an embedded application.

Load balancing is key for using virtualization on multiprocessor servers. On the embedded side, multiple processor systems are becoming more common, and load balancing is often a requirement. Load balancing often leads to application migration, which is another use of virtualized systems.

Another reason is to mix different versions of the same OS. This can be used to support legacy applications as well as to test new environments while maintaining existing services. It's a key advantage in high-availability systems, too.

Of course, mixing different OSs is another possibility. In embedded environments, one OS may be a real-time version, allowing combinations like RTLinux but without the need for tight application integration. Such a mixed-mode environment may emerge for a variety of reasons.

For instance, combining two processors into one can reduce the hardware cost and the system footprint. With a virtualized system, the OSs from the two processors, which are typically different, can run on the single processor.

Security presents another situation where a hypervisor environment can improve system capability (see "Virtual Security," p. 50). In this case, the hypervisor is the most secure part of the system. It can give guest OSs different levels of security and even control the types of connections available between guest OSs and their applications. System managers then have simpler, more flexible control over security policies. It also can make the job of proving system security easier since low-security environments can be isolated from high-security environments.

Regardless of the reason, virtual systems are going to play a more important part in processing and a very specific role in embedded applications. If all this sounds familiar, you may have experience with mainframes. Virtualization has been around for some time, starting with systems like IBM's VM/370 and the Burroughs B1700 processor.

SINGLE OS IMAGE
The single-image OS is so ubiquitous, it almost doesn't need to be mentioned. But it's interesting to note the wide variety of similar implementations. Boil down most real-time OS (RTOS) implementations to the bare architecture, and they become almost identical. On the other hand, unique features such as security, communications, and development tools continue to differentiate vendors and their solutions.

In virtualization, the ARINC 653-1 standard stands out like the POSIX standard does for general OS services. ARINC 653 defines an APplication EXecutive (APEX) for space and time partitioning. It's usually found in DO-178B environments, and it has found support in standard OSs like Green Hills Software's Integrity and LynuxWorks' LynxOS-178.

With ARINC 653, developers can define resource partitions for applications. A partition can be limited in a variety of ways, such as the length of the time slot it has to execute and the amount of accessible memory. Of course, this is all done while keeping applications isolated. This makes it easier to control and confirm Evaluation Assurance Level (EAL) security within a system.

PARA-VIRTUALIZATION
Architecture: OS partitions
Hardware: normally uses MMU and IOMU (I/O management unit)
Performance: close to native OS
Pros: low overhead; easier virtualization of device drivers
Cons: guest OSs must be modified
Examples: Xen, UML

This approach to virtualization requires cooperation between the underlying hypervisor and the guest OSs. Para-virtualization still needs memory protection and the ability to trap privileged instructions, but it doesn't require the same level of support as full virtualization.

In theory, para-virtualization can run without memory protection, accepting the resulting loss of security. That might be useful in embedded applications, where mixing OSs is necessary yet developers have complete control over the applications.

Para-virtualization is showing up in a wide range of products, from the open-source Xen to Jaluna's OSWare. OSWare runs on a host of processors and has even placed Linux atop Texas Instruments' C64x DSPs alongside real-time OSs like Wind River's VxWorks. Green Hills' Padded Cell technology, employed in the company's Integrity RTOS, is based on para-virtualization and Integrity's ARINC 653 support. In this case, Integrity itself is the hypervisor.

Xen initially targeted x86 processors and Linux. Yet this generic framework appears to be generating a good bit of interest. On the x86, Xen runs in ring 0 (most privileged). Guest OSs run on rings 1 and 2, leaving ring 3 for user space.

Like most para-virtualization systems, Xen virtualizes devices in device channels. Guest OSs use stub device drivers that connect to these channels. The real device drivers reside in another guest partition. It's possible to have drivers that go directly to the hardware, but systems are more flexible and secure if all device support runs through the hypervisor.

This also gives a system manager more flexibility, since the real devices associated with an OS are under the manager's control. Although Xen provides its own device-driver environment, it doesn't need a whole new set of device drivers. That's because it uses Linux device drivers.

The device interface is part of the Xen Hypercall application programming interface (API). This API is used for guest-OS customization. Simon Crosby, vice president of strategy and corporate development at XenSource, indicates that many developers employ the API as the target for a range of services.

The InfiniBand Trade Association's OpenIB stack supports Xen. In fact, the Xen port of Linux uses the API, making Xen a regular Linux target just like the x86 and PowerPC hardware architectures.

Para-virtualization systems can take on a number of different guises depending on how the system is implemented and the underlying hardware support. A number of areas also must be addressed. Two MMU virtualization methods include shadowing and direct mode.

With shadowing, the hypervisor traps all MMU calls. Though more secure, it can double the number of page faults and interfere with guest OS page coloring methodologies. Direct mode is more efficient. However, it assumes that the guest OS is secure because the hypervisor provides the guest OS with access to the hardware.

At this point, Xen lacks good management tools. XenSource is one company working to match the quantity and quality of tools already found in full-virtualization systems, such as those from VMware and IBM.

Xen 3.0 looms on the horizon. It will include a number of new features, such as symmetric-multiprocessing (SMP) guest support, large-memory support, and support for 64-bit x86 processors. It also will support the VT-x extensions in Intel's new processors. Open Source Development Labs (OSDL) will help with the changes. Furthermore, companies like Intel have shown lots of interest in Xen. Novell now ships SuSE Linux with Xen as well.

User-mode OSs constitute a special case of para-virtualization. User Mode Linux (UML) is one of the best known implementations using this approach. User-mode OSs are modified like a para-virtualization guest OS, but the target interface is the same as that of applications running on the host OS. This leads to a less efficient implementation, since the interface is designed for applications, not a guest OS. Functionally, UML and Linux on a virtual system are the same, but UML's performance is lower.

FULL VIRTUALIZATION
Architecture: OS partitions
Hardware: virtual-machine support (MMU and IOMU)
Performance: depends on virtual-machine support overhead
Pros: easy to implement if hardware support is included; unmodified guest OSs
Cons: requires hardware support; possibly significant overhead; may require runtime modification of guest OS or applications
Examples: VM/370, VMware, Microsoft Virtual Server

Get the right hardware, and full virtualization is possible. Possible doesn't always mean efficient or practical, though. Take the x86 environment. It started without virtual-memory management and eventually grew into what we have today: a system that can provide full virtualization, but at a significant performance and complexity cost.

Many systems have been designed from the ground up with virtualization in mind. Most notable are IBM's VM/370 and AS/400. In VM/370, IBM's CP (Control Program) served as the virtual-machine monitor. It could manage any number of virtual machines, each an exact copy of the underlying hardware.

As microprocessors move into the realm of virtualization, it's not surprising to find them taking on characteristics of mainframe processors.

New hardware, including Intel's VT-x and AMD's Pacifica, will significantly boost efficiency for full virtualization support. Among the improved features accompanying these chips are streamlined handling of privileged instructions and better handling of I/O virtualization.

Many embedded chips already offer the kinds of features that are finding their way onto x86 processors. IBM's and Freescale's PowerPC, Sun's Sparc, and MIPS Technologies' namesake provide platforms for virtualized environments. Of course, not all solutions need or provide this level of virtualization, but the architectures support it.

Full virtualization exposes a virtual machine to a guest OS, including the various privileged instructions used to manage, say, virtual memory. It effectively mirrors the kind of virtualization an OS provides to an application. This means at least three levels of execution are necessary, versus the two needed for a single-image virtual-memory OS. Processors like the x86, which already have four privilege levels, must add at least one more level to provide a fully virtualized solution.

Lacking this level of support hasn't stymied virtualization on x86 processors, though. Microsoft's Virtual Server and VMware's products support the current range of x86 processors, using tricks such as trapping and code patching to get the necessary performance and security. Patching the OS or application code boosts performance while hiding the underlying system. Likewise, Xen relies on segmentation for protection, because switching page tables on a standard x86 is too slow.

There are some downsides with full virtualization. A system must be able to virtualize all I/O operations at a low level, since device drivers can still reside in the OS environment. It's possible to replace device drivers with stubs, much like para-virtualization environments. But the whole idea of running fully virtualized is to minimize the changes to the guests.

Virtualization isn't a new technology. Still, it's becoming more practical on smaller platforms thanks to the increasing power of microprocessors and the greater need to incorporate multiple OSs on a platform.

The topic of virtualization is quite complex, too. If you have the time and the interest, check out the book Virtual Machines: Versatile Platforms for Systems and Processes by James Smith and Ravi Nair. For my review of the book, see ED Online 10765.

NATIVE SINGLE OS IMAGE
Architecture: application partitions
Hardware: memory-management unit (MMU)
Pros: easy to implement
Cons: provides only application isolation
Examples: most RTOSs on processors with an MMU
