Embedded applications are becoming complex and sophisticated to meet several objectives. First, applications need to improve efficiency, which requires significant controller performance to run sophisticated algorithms. Next, the ubiquitous internet availability is enabling embedded applications to become “smarter” and more “connected.” The third objective is to reduce cost by integrating several functions like sensor interfacing, connectivity, motor control, digital power conversion, security, and safety in a single controller.
Such a high level of integration requires domain experts to handle specific functional areas or modules, after which the multiple functions must be integrated into an end application. Because multinational companies often have their teams spread across the globe, it's even more important that the various modules can be designed separately and integrated seamlessly, reducing development risk and effort.
First, let’s look at how the objective of improving energy efficiency requires increased controller performance. Consider the example of a motor-control application. The industry has been moving away from brushed dc motors, which offer 75-80% efficiency, and toward brushless dc (BLDC) motors or the newer permanent-magnet synchronous motors (PMSMs). These motors offer up to 85-90% efficiency, reduced acoustic noise, and increased product lifespan.
A typical brushed dc motor requires very simple direction and speed-control techniques that can be accomplished using an entry-level 8-bit microcontroller. In comparison, controlling a sensorless BLDC or PMSM motor with “field-oriented control” (FOC) is more sophisticated and computationally intensive. It allows close control of the energy used by the motor over a wide range of loads and speeds, significantly improving efficiency. Additional control algorithms may also be implemented based on application requirements, like “Rotor Stall Detection and Recovery,” “Wind-milling,” “PI Loop Saturation and Anti-windup,” “Flux Weakening,” and “Maximum Torque per Ampere,” which help improve performance and response to a dynamic load while increasing overall efficiency.
All of these advanced control techniques are computationally intensive, involving math operations such as division, multiplication, square roots, and trigonometric functions that demand significant central processing unit (CPU) bandwidth. Because these control functions must execute periodically at a high frequency, the CPU must be allocated to them at precise time intervals.
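As a rough illustration of one of the algorithms named above, here is a minimal PI regulator with output saturation and clamping anti-windup, sketched in C. The gains, limits, and floating-point math are illustrative assumptions of this sketch; production DSC firmware would typically use fixed-point arithmetic and application-tuned coefficients.

```c
typedef struct {
    float kp;        /* proportional gain (illustrative value) */
    float ki;        /* integral gain, pre-scaled by the sample period */
    float integral;  /* accumulated integral term */
    float out_min;   /* lower actuator limit */
    float out_max;   /* upper actuator limit */
} pi_ctrl_t;

/* One control-loop iteration: clamp the output to the actuator range
 * and freeze the integrator while clamped (clamping anti-windup). */
float pi_step(pi_ctrl_t *c, float setpoint, float measured)
{
    float err   = setpoint - measured;
    float unsat = c->kp * err + c->integral;
    float out   = unsat;

    if (out > c->out_max)      out = c->out_max;
    else if (out < c->out_min) out = c->out_min;

    /* Accumulate only while unsaturated, so the integral term cannot
     * wind up beyond what the actuator can deliver. */
    if (out == unsat)
        c->integral += c->ki * err;

    return out;
}
```

In a real control loop, `pi_step()` would be called from the periodic interrupt at the loop frequency, which is exactly the tight timing requirement discussed above.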
Such tight control-loop execution can take up most of the CPU bandwidth and impact other time-critical functions in a complex application. An embedded developer will have limited flexibility to add any additional functionalities like communication, safety monitoring, and system and housekeeping functions that could interfere with the time-critical control of the motor. The challenge increases in digital power applications, where the time-critical control-loop functions need to be executed at an even higher frequency.
Now, let’s consider the next objective driven by internet or cloud connectivity. The latest industry trend is for applications to be “smart” and “connected,” offering intelligence and accessibility from anywhere. These requirements demand embedded applications to include multiple software stacks such as:
- The main application function software. In our example, this function implements motor control, housekeeping, and user-interface operations that are commonly required in most of the applications.
- The communication software running the necessary network application protocols for connectivity.
- The security software for IP protection, privacy, data integrity, authenticity, and access control, and for thwarting any hacking possibilities.
- If an application involves human operation and malfunction could cause bodily injury, then functional safety software also needs to be part of such a safety-critical application.
- Some of the end applications may also have customization requirements where certain features will be unique to specific variants targeted for different market segments.
Today’s applications are increasingly “smart” and “connected.” (Courtesy of Microchip)
All of these functional requirements call for various domain-expert teams to develop the respective software stacks and then integrate them into an end application optimally and quickly. Experts from multiple domains will need to coordinate very closely to design and implement the end application. This scenario gets further complicated in multinational companies where the expert teams are spread across the globe.
Finally, cost optimization is an important objective that’s common to all end applications. Often, embedded engineers won’t have the budget to consider a multi-microcontroller design, where individual software stacks can execute on different microcontrollers with very little coordination. A single-microcontroller design with very high integration will be the most cost-effective solution. It further reduces cost through a compact PCB design and fewer external components like crystal oscillators and passive components.
What Are the Development Challenges?
To implement sophisticated algorithms and execute multiple software stacks, embedded designers often choose a higher-performance microcontroller. However, this may not be the best choice due to the challenges associated with time-critical execution, multiple software stack development, integration, and testing.
A simple scheduler or a real-time operating system (RTOS) may serve the purpose of scheduling and executing multiple tasks from different stacks on a high-performance CPU in a time-sliced manner. But a scheduler or an RTOS adds overhead that consumes CPU bandwidth, memory, and other microcontroller resources. The time-slicing also adds context-switching overhead, reducing effective CPU utilization.
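To make that overhead concrete, here is a minimal cooperative, time-sliced scheduler sketch in C of the kind a single-core design might use. The task table, tick-driven dispatch, and overhead counter are illustrative assumptions of this sketch; a real RTOS kernel carries considerably more per-task state and context-switching cost.

```c
#include <stddef.h>

#define MAX_TASKS 4

typedef void (*task_fn)(void);

typedef struct {
    task_fn  run;      /* task entry point; runs to completion */
    unsigned period;   /* invoke every 'period' ticks */
    unsigned elapsed;  /* ticks since the task last ran */
} task_t;

static task_t        tasks[MAX_TASKS];
static size_t        task_count;
static unsigned long dispatch_overhead;  /* bookkeeping done per tick */

/* Example task: just counts how often it was dispatched. */
static int demo_runs;
static void demo_task(void) { ++demo_runs; }

/* Register a run-to-completion task; returns -1 when the table is full
 * or the arguments are invalid. */
int scheduler_add(task_fn fn, unsigned period)
{
    if (task_count >= MAX_TASKS || fn == NULL || period == 0)
        return -1;
    tasks[task_count].run = fn;
    tasks[task_count].period = period;
    tasks[task_count].elapsed = 0;
    ++task_count;
    return 0;
}

/* Called from a periodic timer interrupt. Every tick pays the cost of
 * walking the whole task table even when nothing is due; that fixed
 * bookkeeping is the scheduling overhead described in the text. */
void scheduler_tick(void)
{
    for (size_t i = 0; i < task_count; ++i) {
        ++dispatch_overhead;
        if (++tasks[i].elapsed >= tasks[i].period) {
            tasks[i].elapsed = 0;
            tasks[i].run();
        }
    }
}
```

Note that the dispatcher work happens on every tick regardless of whether any task is due, which is why the overhead grows with the number of stacks being time-sliced.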
The scenario gets further complicated when two time-critical complex control loops need to be executed periodically at a precise and overlapping time interval or when two asynchronous safety-critical functions must be executed simultaneously in real time. In such cases, considering an even higher-performance microcontroller will not always serve the system requirements.
Even if a high-performance single-core microcontroller has enough CPU bandwidth to accommodate multiple software stacks, perhaps together with an RTOS, there are many other design complications to consider. Developing, integrating, and testing multiple software stacks needs a considerable amount of coordination among subject-matter experts. It requires designing a compatible, modular software architecture that dynamically shares resources and exchanges information. The complications increase further if any of the legacy stacks doesn’t have a compatible architecture:
- Legacy stacks may have different architectures based on either polling mode or interrupt mode.
- Legacy stacks may use the same microcontroller resources, which now need to be shared without conflicts to avoid hazards like race conditions and deadlocks.
- Stacks may have several common global variables and functions with the same names.
- Each stack may function perfectly when executed individually but may malfunction on integration. Debugging such an integrated solution will be a nightmare that increases the development time.
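The resource-sharing hazard above can be sketched as follows: when two stacks update one shared counter, the read-modify-write must be wrapped in a critical section. The interrupt-enable flag below is a software simulation standing in for real hardware intrinsics (such as a disable-interrupts instruction), which is an assumption of this sketch.

```c
#include <stdint.h>

static volatile uint32_t shared_events;   /* resource both stacks touch */
static volatile int      irq_enabled = 1; /* simulated interrupt state */

/* Stand-ins for real "disable/enable interrupts" intrinsics. */
static void enter_critical(void) { irq_enabled = 0; }
static void exit_critical(void)  { irq_enabled = 1; }

/* Called by both stacks. Without the critical section, the counter's
 * read-modify-write could interleave with an interrupt-context update
 * and lose events: exactly the race condition described above. */
void report_event(void)
{
    enter_critical();
    shared_events++;   /* protected read-modify-write */
    exit_critical();
}
```

On a single core, every such shared touchpoint between stacks needs this kind of discipline, which is what makes post-integration debugging so painful when it's missed.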
An already available standalone stack may not always help in reducing the development time when implemented on a single-core microcontroller. All of these challenges pose significant development risk and increase the time to market.
A dual-core controller helps to improve efficiency, simplify development efforts, and reduce cost with the following offerings:
- It offers higher performance than a similar single-core controller operating at twice the speed and is ideal for applications with two or more time-critical functions.
- It simplifies software development with dual independent cores that enable geographically dispersed software development; seamless integration with very minimal coordination; and easy feature customization across multiple variants of a product line.
Dual-Core Controller: Better Performance
A dual-core controller facilitates higher software integration by allowing different functions to execute on two independent cores. It’s particularly helpful if an application requires executing two time-critical functions periodically at precise times or in response to asynchronous events. With each time-critical function executing on its own independent core, there’s no contention between the functions. This improves overall CPU utilization because context-switching overhead between the functions is reduced or eliminated.
Many dual-core controllers come with dedicated resources that further reduce the switching and arbitration overheads. Some of the dual-core controllers also feature dedicated fast Program RAM (PRAM) coupled to one of the cores, typically to the slave core, which further improves the performance. Thus, a dual-core controller offers higher performance than a similar single-core controller operating at twice the speed.
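The inter-core data exchange that such dedicated resources support can be sketched in software. Real dual-core devices implement mailboxes in hardware, with ready flags and optional interrupts; the struct and function names below are assumptions of this simulation, not any vendor's register map.

```c
#include <stdint.h>
#include <stdbool.h>

/* Software model of a one-word, one-direction hardware mailbox. */
typedef struct {
    volatile uint16_t data;  /* payload word */
    volatile bool     full;  /* set by the writer, cleared by the reader */
} mailbox_t;

/* Writer side (e.g. master core): refuses to overwrite a message the
 * other core has not consumed yet. */
bool mailbox_send(mailbox_t *mb, uint16_t value)
{
    if (mb->full)
        return false;
    mb->data = value;
    mb->full = true;   /* on real hardware this could raise an interrupt */
    return true;
}

/* Reader side (e.g. slave core): non-blocking receive. */
bool mailbox_recv(mailbox_t *mb, uint16_t *out)
{
    if (!mb->full)
        return false;
    *out = mb->data;
    mb->full = false;
    return true;
}
```

Because the handshake is a single flag owned by the hardware, neither core needs locks or time-sliced arbitration to exchange data: this is where the reduced switching and arbitration overhead comes from.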
Dual-Core Controller: Simplified Development
Many dual-core controllers provide dedicated memory, peripherals, and debug support for each core. A flexible resource-management scheme further allows shared resources to be allocated to either core per the application’s requirements.
A typical dual independent core controller block diagram. (Courtesy of Microchip)
Such a microcontroller architecture enables independent software development with very minimal coordination between domain experts and facilitates easy integration. A dual-core controller particularly simplifies integrating two software stacks that are based on different architectures or require the same microcontroller resources, as they can now run on two independent cores.
It’s similar to developing the stacks to execute on two different controllers, but with the benefits of improved performance, optimal resource utilization, and reduced cost. This eliminates complications associated with stack integration, time-sliced resource sharing, and the associated hazard conditions.
A dual-core controller also enables easy post-integration debugging, as each core comes with its own debug interface. With minimal dependencies between the stacks, it becomes much simpler to isolate and rectify issues. Offering so many advantages, a dual-core controller significantly reduces development risk and time to market.
To add to the list of benefits, a dual-core controller enables easy customization without modifying the main functionality. By architecting the main functionality to run on one core, custom features can be implemented on the other core. All of these capabilities simplify software design even when multiple teams are involved across the globe, and they enable seamless integration with very minimal coordination effort.
Dual-Core Controller: Cost Reduction
By offering higher performance, a dual-core controller enables an embedded designer to realize complex applications using a single microcontroller. By simplifying the development, a dual-core controller drastically reduces design time and risk and enables competitive designs with reduced cost and time-to-market.
To practically realize all of the above benefits of a dual-core controller, a little experiment was conducted. In this demo, one of the cores (typically the slave core) implements motor control running a FOC algorithm to control a BLDC motor. To offer a graphical user interface, the other core (the master core) executes a graphics stack to interface an OLED display and implements a system function to interface with the potentiometer and buttons that control the speed and state of the motor.
Shown is a multi-function complex application block diagram. (Courtesy of Microchip)
To demonstrate the design simplicity offered by a dual-core device, the graphics stack and the motor-control software were developed by two different, geographically separated teams. With the flexibility to maintain independent software architectures, very little coordination was required between the two teams. One team with expertise in motor control could very quickly implement the FOC algorithm to control a BLDC motor. With the other team bringing expertise in graphical user interface development, both teams could leverage their experience in their respective areas and quickly complete the project. The only coordination required was an agreement on how the button and potentiometer status would be conveyed between the two cores.
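That agreement between the teams can be pictured as a small shared message layout like the sketch below. The field names, ADC range, and speed scaling are hypothetical, chosen only to illustrate how little the two teams actually needed to share.

```c
#include <stdint.h>

/* The shared contract: user-input status sent from the GUI (master)
 * core to the motor-control (slave) core. Field names and ranges are
 * hypothetical, for illustration only. */
typedef struct {
    uint16_t pot_raw;    /* potentiometer ADC reading, 0..4095 assumed */
    uint8_t  start_stop; /* 1 = run the motor, 0 = stop */
    uint8_t  direction;  /* 0 = forward, 1 = reverse */
} user_input_msg_t;

/* Motor-core side: map the raw potentiometer reading onto the speed
 * command range; the 4,000-rpm ceiling is illustrative. Integer math
 * avoids any floating-point cost in the control path. */
uint16_t msg_to_speed_rpm(const user_input_msg_t *m)
{
    const uint32_t max_rpm = 4000;
    return (uint16_t)(((uint32_t)m->pot_raw * max_rpm) / 4095u);
}
```

Everything else, the FOC internals on one side and the graphics stack on the other, stays entirely private to its own core and team.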
As an extended experiment, both teams used already-available software libraries to implement the motor control and the graphics interface. As a result, the project was completed in no time, with very little effort spent on integrating two different legacy stacks. Because of the high performance of the cores, plenty of CPU bandwidth remained available on both. To push the limits, an OLED display interface was also added on the slave core to show dynamic motor parameters without affecting motor performance. The motor-control application with the GUI was demonstrated live, running on a dual-core digital signal controller (DSC).
An example of a dual-core controller delivering all of these benefits is Microchip’s dsPIC33CH128MP508 dual-core DSC. The dual-core dsPIC33CH offers high performance with dedicated memory and application-specific peripherals, making it well-suited for high-performance embedded, motor-control, and digital power-conversion applications. The dual cores in this family enable designers to develop firmware for different system functions separately and then bring them together seamlessly, without code blocks interfering with each other.
Harsha Jagadish is Product Marketing Manager, MCU16, for Microchip Technology.