Electronic Design

Partition Hybrid Linux/RTOS-Based Systems

Analyzing "hard" and "soft" real-time deadlines is critical when determining whether to assign application-software containers to Linux or an RTOS.

Partitioning a multiprocessor system poses a challenge to embedded-systems architectural designers. This is particularly true in complex multifaceted systems that must provide both "hard" time-critical functionality and large-scale information-processing services. To deal with these disparate requirements, such systems often contain a mix of different processors to handle hard real-time aspects, such as digital signal processors (DSPs), plus powerful conventional processors (CPUs) for computation-intensive information processing aspects of the system.

Different operating systems, such as embedded Linux and real-time operating systems (RTOSs), best support these different processors in achieving their disparate objectives. Of course, software running under one operating system on one type of processor may well need to communicate with software running on another type of processor with a different operating system. Hence, a globally uniform communication mechanism to support such interprocessor communication is desirable.

High-Level Partitioning Of Complex Systems
One methodic approach to multiprocessor software design is the CODARTS/DA approach of Hassan Gomaa (Software Design Methods for Concurrent and Real-Time Systems, Addison Wesley SEI Series in Software Engineering, Reading Mass., 1993. ISBN: 0201525771). CODARTS/DA is short for COncurrent Design Approach for Real-Time Systems/Distributed Applications. Many of its ideas can be brought forward and tweaked for use with new technologies, such as heterogeneous multicore system-on-a-chip (SoC) hardware platforms that may contain both DSPs and powerful conventional CPUs in an integrated package.

When doing system partitioning, eventually one wants to identify the software "tasks" of the system. Each task is single-threaded (internally sequential), and tasks can run in parallel with one another in a concurrent software system. However, this is not necessarily the first goal of system partitioning. In highly complex embedded systems, the system may eventually be implemented as hundreds or thousands of concurrent tasks, each possibly containing tens or hundreds of software functions. Those numbers of tasks and functions make direct decomposition of a system into tasks unwieldy. Hence, it's best to approach the identification of software tasks gradually. Initially, that involves identifying larger-scale "chunks" of system functionality that may later be broken down into individual tasks.

Fortunately, many of today's embedded operating systems provide such a "chunking" mechanism. Each chunk can be considered a "container" that holds a number of software tasks (Fig. 1).

Different embedded operating systems have different naming conventions for these chunks or containers. For example, the containers are called "Processes" in Embedded Linux, and the tasks are called "Threads." On the other hand, the OSE RTOS calls containers "Blocks" and tasks "Processes." (Be careful of the differing terminologies. The word "process" can mean vastly different things depending on the particular OS.) Other RTOSs may refer to their containers by names such as "Address Spaces" or "Partitions." Some RTOSs lack a chunking or container mechanism altogether.

Embedded operating systems that support such chunking or containerization make it possible to set up memory-protection barriers between the containers (in conjunction with processor silicon providing appropriate hardware MMU support), as shown in Figure 2.

In this way, the tasks collected in one container are protected against many failures caused by the incorrect operation of tasks in other containers. For instance, a task attempting to follow a stray pointer across an intercontainer memory-protection barrier and overwrite data or program memory on the other side would be barred from doing so, and a fault would be declared. Each container also could have its own local pool of RAM memory buffers. Therefore, tasks in other containers could not exhaust its buffer capacity even if they misbehaved badly. As a result, faults would be localized rather than damaging other containers.
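The per-container buffer pools described above can be sketched as follows. This is an illustrative sketch only, with hypothetical container names and pool sizes; it is not taken from any particular RTOS API.

```python
class BufferPool:
    """A fixed-size pool of message buffers local to one container."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0

    def alloc(self):
        if self.in_use >= self.capacity:
            raise MemoryError("container-local pool exhausted")
        self.in_use += 1
        return bytearray(64)          # one fixed-size buffer

    def free(self, buf):
        self.in_use -= 1

class Container:
    def __init__(self, name, pool_capacity):
        self.name = name
        self.pool = BufferPool(pool_capacity)

# A leaky task in container A exhausts only A's own pool.
a = Container("data-acquisition", pool_capacity=4)
b = Container("data-analysis", pool_capacity=4)

leaked = [a.pool.alloc() for _ in range(4)]   # misbehaving task in A
try:
    a.pool.alloc()                            # fails -- fault localized to A
    fault_in_a = False
except MemoryError:
    fault_in_a = True

buf = b.pool.alloc()                          # B still allocates normally
```

Because each pool is private to its container, the fault declared in container A never disturbs container B's buffer supply.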

Identifying these containers for tasks is an early step in the design of a complex embedded system. Each container will eventually hold a number of software tasks that together perform one major service. This service should be as independent as possible of services provided by other containers. Decomposing a system into containers is closely tied to the application services that the system must provide, as well as to the problems it's expected to solve. The content and structure of the project's requirements-specification document becomes a good guide for initially decomposing the system into containers. In many complex systems, containers typically supply the following kinds of major services:

  • Automatic control
  • Coordination of other services
  • Data collection
  • Data analysis
  • Server (often containing data stores or I/O devices)
  • User services (e.g., operator interface)
  • System services (task scheduling, network communication, file management)

    Application Example

    A good example is a medical bedside system for a hospital intensive-care application, which could well contain a number of containers of various kinds (Fig. 3). Real-time medical data is acquired in a "patient vital-signs data acquisition" container. It would regularly sample various incoming signals, including core body temperature, peripheral limb temperature, blood pressures at several locations, respiration, blood oxygen concentration, ECG (electrical signals from the heart), cardiac arrhythmia (sequences of abnormal heartbeats), and perhaps EEG ("brain waves"). Note that this one container provides a much larger service than simply handling a single kind of medical measurement.

    If this medical system also were to be involved in the computerized administration of fluids, such as drugs or nutrition to a patient, the system then would need a separate "fluids infusion control" automatic-control container. If the system were to interact with doctors and nurses through voice input and output, it then would also need a "human voice communication" user services container. If the system is going to handle medical records of patients and documentation of their stay in the intensive care unit, the chore can fall to a "patient data" store container.

    Also, if the system were smart enough to check for drug interactions with other drugs and their interactions with the patient's condition and nutrition, this would be done in a "drug interactions checking" coordination container. A separate "drug interactions knowledge base" server container could support processing.

    The various operating-system services supporting each processor in our hybrid multiprocessor embedded system would be in system-services containers, one per processor. Consequently, a processor running embedded Linux would have a "Linux system-services container," and a processor running an RTOS would have an "RTOS system-services container."

    Partitioning Containers Into Tasks
    Once a first-cut partitioning of an application system into high-level containers is accomplished, the containers should be further decomposed internally into the individual tasks that will run within them. Each task is single-threaded (sequential) and can run in parallel with other tasks in its container, as well as in parallel with tasks in other containers in a concurrent software system. Remember that each individual task can possibly contain tens or hundreds of software functions: A task is a unit of concurrency, but it's not the smallest unit of software architectural design. Most tasks consist of a number of functions that are executed sequentially within the task. In embedded systems, individual tasks typically provide the following sorts of services:

  • Handling a single asynchronous input or output device
  • Executing software that must meet a single time deadline
  • Performing a large calculation
  • Managing a large data store
  • Executing all software that must run at a single point in time
  • Executing all software that must run periodically at a single frequency

    To continue with our medical intensive-care example, the patient vital-signs data-acquisition container may house a number of tasks that interact with a number of patient signal sensors. These sensors could supply medical measurements at different frequencies and at different times. Device-handling tasks would include Get Patient Core-Temperature Signal, Get Patient Arterial-Blood-Pressure Signal, and Get Patient Central-Venous-Blood-Pressure Signal.
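Each of these device-handling tasks samples its signal at a single fixed frequency. A minimal sketch of such a periodic task, scheduling each release from an absolute time base so that execution jitter does not accumulate into drift (the task body and the 10-ms period are illustrative assumptions):

```python
import time

def run_periodic(task, period_s, iterations):
    """Run `task` every `period_s` seconds; return the release times."""
    releases = []
    next_release = time.monotonic()
    for _ in range(iterations):
        releases.append(next_release)
        task()
        next_release += period_s            # absolute schedule: no drift
        delay = next_release - time.monotonic()
        if delay > 0:
            time.sleep(delay)               # sleep until the next release

    return releases

samples = []
releases = run_periodic(lambda: samples.append("sample"), 0.01, 5)
```

Advancing `next_release` by the period, rather than sleeping for a fixed interval after each iteration, keeps the long-run sampling rate exact even when individual iterations run late.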

    The Get Patient Arterial-Blood-Pressure Signal task might provide its output to an additional task called Analyze Arterial Blood Pressure for Systolic, Diastolic, and Mean Values. This extra task would perform some data filtering, waveform analysis, and perhaps pattern recognition on its inputs. We then see that this data-acquisition container could well contain a "pipeline" of several tasks for each sensor from which it's acquiring raw data, resulting in perhaps tens or more tasks within this one container.
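Such a per-sensor pipeline can be sketched with each stage as a single-threaded task connected to the next by a message queue. The stage functions below are trivial stand-ins for the real acquisition and waveform-analysis code:

```python
import queue
import threading

def stage(inbox, outbox, work):
    """Generic pipeline task: read a message, process it, pass it on."""
    while True:
        msg = inbox.get()
        if msg is None:                  # shutdown marker
            outbox.put(None)
            return
        outbox.put(work(msg))

raw_q, filtered_q, result_q = queue.Queue(), queue.Queue(), queue.Queue()

# Stage 1 stands in for signal acquisition; stage 2 for analysis.
t1 = threading.Thread(target=stage, args=(raw_q, filtered_q, lambda s: s * 2))
t2 = threading.Thread(target=stage, args=(filtered_q, result_q, lambda s: s + 1))
t1.start(); t2.start()

for sample in [10, 20, 30]:
    raw_q.put(sample)
raw_q.put(None)

results = []
while (m := result_q.get()) is not None:
    results.append(m)
t1.join(); t2.join()
```

Each stage is internally sequential, yet all stages run concurrently, which is exactly the task model the article describes.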

    Upon completing a first-cut partitioning of containers into individual tasks, the first-cut higher-level partitioning of the system into containers should be re-evaluated and perhaps modified in an iterative way. This may cause changes in decomposing the containers into tasks, or tasks may be reassigned to different containers. Timing requirements imposed by application needs may also affect this two-level partitioning.

    Timing Requirements
    Many, but not all, parts of complex embedded systems may be required to meet specific time deadlines. This is the crux of "real-time." There are two types of deadlines: "hard" real-time deadlines absolutely must be met every time the software runs, while "soft" real-time deadlines are specified as targets that are okay to miss occasionally.

    Most complex systems possess a combination of some hard deadlines, some soft objectives, and some software that's not time-constrained. In addition, different applications or different parts of a single complex application can have time-deadline requirements that may differ by orders of magnitude in the time dimension. Some application time deadlines may be on the order of seconds, while others may be expressed in milliseconds, microseconds, or even nanoseconds.

    The disparate operating-system options available for embedded systems support different portions of these time scales and different sides of the hard-versus-soft real-time dichotomy. For example, standard Linux is not appropriate for addressing hard real-time requirements, because certain sections of Linux will mask interrupts. Furthermore, Linux kernel services (operating-system calls) aren't preemptible until they complete or release the processor by calling the scheduler. Latencies of 10 to 100 ms in magnitude can be expected when using standard Linux. The new Linux 2.6 kernel, although a vast improvement over previous versions, still has worst-case preemption latencies in the hundreds of microseconds.

    Specially modified Linux "preemptible" kernels still aren't fully preemptible. Their non-preemptible portions are sometimes quite lengthy and time-consuming. However, Linux provides an ideal environment for many high-level, information-intensive applications and management services that RTOSs don't support very well.

    RTOS kernels complement Linux-based operating systems in that they do support hard real-time requirements and deadlines down below several hundreds of microseconds. Despite their clear performance advantages where those are needed, however, RTOSs are often limited in the number of add-on software components available for use with them. They also are more costly in cases where Linux is an option, and weaker in support for information-intensive processing.

    Hence, it may be optimal to have some processors in a hybrid or heterogeneous multiprocessor embedded system running Linux, while others run an RTOS. Assigning containers of application software to these disparate processors running different operating systems will depend on their timing requirements, in addition to other factors. That's why each container, once identified (as in Figure 3), must be examined for the scales of its time deadlines and its hard-versus-soft real-time requirements. A container holding software that must meet hard deadlines shorter than hundreds of microseconds can run only on an RTOS-based processor. If a container has only soft deadlines in the multi-millisecond range or longer, it could reasonably run on a Linux-based processor. (It also could run on an RTOS-based processor, but other considerations may make Linux more attractive.)
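This assignment rule can be written down as a small decision function. The 500-µs threshold below is an assumption standing in for the article's "hundreds of microseconds":

```python
HARD_RTOS_THRESHOLD_US = 500    # assumed stand-in for "hundreds of us"

def choose_os(deadline_us, hard):
    """Return which kind of processor may host a container."""
    if hard and deadline_us < HARD_RTOS_THRESHOLD_US:
        return "RTOS only"                  # short hard deadline
    if not hard and deadline_us >= 1_000:   # soft, multi-millisecond
        return "Linux or RTOS"
    return "RTOS preferred"                 # everything in between

# The 10-us ECG sampling window demands an RTOS; 200-ms soft
# arrhythmia analysis is comfortable on Linux.
ecg = choose_os(10, hard=True)
arrhythmia = choose_os(200_000, hard=False)
```

A real project would weigh additional factors (cost, available middleware, processor load), but the hard/soft split and the deadline scale dominate the decision.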

    Some containers may include application software that must meet several different timing requirements: some that point to an RTOS-based run-time environment, and others that point to a Linux-based run-time environment. If any part of a container must meet hard deadlines shorter than hundreds of microseconds, then the whole container can run only on an RTOS-based processor.

    Another alternative could be to split the container in two. The first of the two new containers would contain all of the software that must meet hard deadlines below hundreds of microseconds in length, and it could only run on an RTOS-based processor. The second of the two new containers would contain no software with hard deadlines or deadlines below the multi-millisecond range, and it could run on a Linux-based processor.
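A sketch of such a split, with the deadline threshold and the task records assumed for illustration:

```python
def split_container(tasks, threshold_us=500):
    """Partition tasks into an RTOS-bound and a Linux-capable container."""
    rtos_tasks = [t for t in tasks
                  if t["hard"] and t["deadline_us"] < threshold_us]
    linux_tasks = [t for t in tasks if t not in rtos_tasks]
    return rtos_tasks, linux_tasks

# Hypothetical task records from the medical example.
tasks = [
    {"name": "Get Patient ECG Signal", "deadline_us": 10, "hard": True},
    {"name": "Cardiac Arrhythmia Analysis", "deadline_us": 200_000,
     "hard": False},
]
rtos_container, linux_container = split_container(tasks)
```

The first result holds everything that forces an RTOS-based processor; the second can then be placed wherever other considerations (cost, middleware availability) suggest.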

    An example of a container that has several different timing requirements is the Patient Vital-Signs Data Acquisition service (Fig. 3, again). It needs to sample patient-generated signals such as the ECG at 1-ms intervals. But the sampling can't be done at just any time during each 1-ms interval. Rather, it must be done systematically during a small window in the first 10 µs of each 1-ms interval. So, the Get Patient ECG Signal task of this container really has a 10-µs timing requirement.

    On the other hand, this same container also may have a requirement to do cardiac-arrhythmia analysis of the very same ECG signal. This is some cardiology-specific complex pattern analysis involving a sequence of heartbeats, including lots of mathematics and matching to contents of several large databases. Cardiologists and nurses would not expect to get the results of such an analysis until at least one heartbeat after the end of the cardiac arrhythmia event.

    Since human hearts beat no more quickly than 300 times per minute, the deadline for cardiac-arrhythmia analysis is 200 ms. But if results occasionally weren't ready for an additional 100 to 200 ms, they would still be welcome to hospital staff. Thus, it is a soft real-time requirement for which a Linux-based processor would do just fine. A separate new container named Cardiac Arrhythmia Analysis could be set up, and the tasks that dealt with this analysis would be taken out of the Patient Vital-Signs Data Acquisition container and put into this new container (Fig. 4). This new Cardiac Arrhythmia Analysis container would be a good example of a major data-analysis service.
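The deadline arithmetic above works out as follows:

```python
# At a maximum of 300 beats per minute, one beat period -- and hence
# the analysis deadline -- is 60,000 ms / 300 = 200 ms.
max_heart_rate_bpm = 300
deadline_ms = 60_000 / max_heart_rate_bpm
```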

    Another example of a container that is appropriate for a Linux-based processor is Human Voice Communication. The software to do human-voice comprehension is excruciatingly complex and challenging, perhaps more so than cardiac-arrhythmia analysis. Users of such a service would be ecstatic if it yielded a very high level of comprehension, even if it took several hundreds of milliseconds of processing. (Users would begin to get upset if there were no basic acknowledgement of the voice input within about 200 ms and incomplete comprehension after about two full seconds.) Once again, this is a soft real-time requirement for which a Linux-based processor would do just fine.

    Intertask And Intercontainer Communication
    After a complex embedded application is partitioned into containers, the containers decomposed into tasks, and timing issues are considered, the next step is designing the communication between the various tasks. This step involves communication to other tasks within the same container and communication to tasks in other containers. Because the containers are partitioned for minimal interdependency, there should not be large flows of intercontainer communication.

    For the remaining required communications, message passing is the preferable choice. Message passing is conceptually simple and intuitive, as well as widely available in a variety of operating systems. If asynchronous message passing is chosen for intertask and intercontainer communication, the result is a loosely coupled design that prevents the propagation of many faults. Other traditional operating-system mechanisms for intertask communication, such as semaphores, mutexes, event flags, or Unix-style signals, are error-prone and become unwieldy or unfeasible when extended into the realm of distributed and multicore embedded systems.
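A minimal sketch of such asynchronous message passing, using a per-task mailbox abstraction. The mailbox API and the task name here are assumptions for illustration, not any particular RTOS's interface:

```python
import queue
import threading

mailboxes = {}   # one mailbox per task, addressed by task name

def register(task_name):
    mailboxes[task_name] = queue.Queue()

def send(dest, message):
    """Asynchronous send: the sender never blocks on the receiver."""
    mailboxes[dest].put(message)

def receive(task_name):
    """Block until a message arrives in this task's mailbox."""
    return mailboxes[task_name].get()

register("fluids-infusion-control")
received = []

def control_task():
    received.append(receive("fluids-infusion-control"))

t = threading.Thread(target=control_task)
t.start()
send("fluids-infusion-control", {"cmd": "set-rate", "ml_per_hr": 50})
t.join()
```

Because the sender only deposits a message and continues, sender and receiver stay loosely coupled; the same send/receive calls could later be routed across processors without changing the application code, which is the location transparency the next paragraph describes.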

    Our next system design step will be to assign various containers and their tasks to different processors. Ideally, precisely the same asynchronous message-passing model could be used for communications between tasks within the same processor, as well as for communications between tasks residing on different processors. This can be done using an RTOS component known as a Link Handler. For Linux-based processors, the message-passing connection to RTOSs can be supported by an operating-system component known as a message-passing Gateway.

    Mapping Into Disparate Processors
    Once the intertask and intercontainer communications are designed, the next system design step is to allocate the various chunks of software to different processors. This allocation must be done so that complete containers are assigned to individual processors. It's permissible to assign more than one container to a processor (if the processor has the capacity for them), but a single container should never be split between two processors. In a heterogeneous multiprocessor system with Linux running on some processors and an RTOS running on others, assigning containers to processors must account for the deadlines and hard/soft real-time needs of each container. This will ensure that only Linux-compatible containers are assigned to Linux-based processors.

    It's okay to assign a Linux-compatible container to an RTOS-based processor, if the processor has the capabilities and capacities to support it. But it's not okay to assign a container that must meet hard deadlines shorter than hundreds of microseconds to a Linux-based processor. Figure 5 shows the assignment of containers to processors in our medical intensive-care example.

    The Linux-based processor at the top of the diagram runs five large containers, each with only soft real-time requirements and deadlines in the range of milliseconds. Such containers are usually mapped into what Linux terms Processes. We see that four of them are memory-protected from one another. The Drug Interactions Checking container and the Drug Interactions Knowledge Base container are shown without memory protection between them. That's because these two containers are quite closely tied to one another, and we may expect lots of data traffic between them. If a major efficiency benefit can be gained by eliminating memory protection between them, such as the ability to transfer messages without them being copied in transit by the operating system, then perhaps the two containers could map into a single Linux Process.

    The RTOS-based processors at the bottom of the diagram of Figure 5 each run containers with hard microsecond-range deadlines as well as soft real-time containers. Memory-protection barriers separate these containers because they are critical to a patient's safety.
