It’s quite hard to dislodge the idea that software is code—and by “code,” I’m referring to the text of high-level and low-level computer languages. A build system converts the text of the code into software.
If you ask embedded system development engineers to show you the software, they will generally open a file of text to show you “the code.” It’s sometimes called “source code,” a phrase that betrays an assumption that this is where it all starts. Code is well-structured text that follows many rules, but it’s still text. Software engineers are writers, maintaining vast libraries of software text. The text is the code is the software.
But this is wrong. Software is not the text of the language it’s written in. Software is the initial configuration of memory that makes a processor do what it is intended to do. The text of “the code” just helps you create the memory configuration.
I am not anti-code. Computer languages are amazing. They capture meaning and provide visibility of data structures, logic, and methods. They offer quick navigation to every detail. And, they make it possible to build software from components and systems from subsystems. I am a fan. But the technology of systems, and the tools used to build the software inside those systems, move on.
The Evolving Industry
The embedded systems software community was, with good reason, a laggard in relation to high-level languages. It wasn’t until C combined “low-level” hardware handling capabilities with “high-level” data structures and processing logic that assembler languages were finally displaced from most projects.
But now the time has come for the richer methods applied to so many embedded system development projects to step forward and show how new tools can deliver disruptive change for the better. Systems engineering, model-based development, and simulation can be central to the whole project, including software.
Embedded software developers are, like it or not, at the frontline of some critical challenges for the future:
• Complexity: Are we building systems-of-systems that no one will be able to fix when they break? (It’s never convenient and not always practical to switch it off and switch it on again.)
• Security: As the Internet of Things (IoT) emerges, embedded software will offer hackers and cyber-warfare instigators some of the most attractive entry points. One automotive study1 offers great insight: “In particular, virtually all vulnerabilities emerged at the interface boundaries between code written by distinct organizations.”
• Cost: Do more with less. It doesn’t really matter if this imperative is caused by a genuine shortage of engineers or the scope of the organization’s ambition. The pressure is the same.
Development methods have to improve continually to stay on top of these issues. Many areas in the chain of knowledge, skills, technologies, processes, and standards need to move forward. But I want to point to code as a weak link in the battle against all three points: complexity, system security, and cost.
As software developers and their managers know, hands-on access to code is not necessary. Commercial products can make system diagrams the point of interaction.2-12 Experimental tools of this type open up even more new capabilities and possibilities.13 In these tools, diagrams “are” the software, at least in the same way as code in a text file “is” the software. Tools such as these have a central role in a few projects and a partial role in many more.
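To make the idea concrete, here is a minimal sketch (not any vendor's format) of what "the diagram is the software" can mean: a signal-flow diagram captured as plain data, executed directly without a separate handwritten code step. The block names and fields are illustrative assumptions.

```python
# Hypothetical block diagram, captured as data: a gain block feeding
# a saturation block. This stands in for what a diagram-centric tool
# would store behind its graphical view.
diagram = [
    {"block": "gain",     "k": 2.5},
    {"block": "saturate", "lo": 0.0, "hi": 10.0},
]

def run(diagram, signal):
    """Push one input sample through each block in order."""
    for blk in diagram:
        if blk["block"] == "gain":
            signal = blk["k"] * signal
        elif blk["block"] == "saturate":
            signal = max(blk["lo"], min(blk["hi"], signal))
    return signal

print(run(diagram, 3.0))  # 2.5 * 3.0 = 7.5, within limits
print(run(diagram, 5.0))  # 2.5 * 5.0 = 12.5, clipped to 10.0
```

The point is that the data structure, like the diagram it mirrors, is the single authoritative description; any text form of the software is derived from it.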
One development manager of automotive sport electronics products said that waving goodbye to handcrafted source code was a traumatic day. But his team's transition to 100% models and automatic code generation was absolutely right for the business: it led to a big reduction in the time and cost of moving software from one hardware architecture to another.
It is time for these types of tools to take center stage. Compared to high-level languages, they offer a more accessible and easy-to-navigate representation of software, as well as more scope for automation. But observe that I said time for center stage, not every position on the stage. These system diagram-based approaches do not today handle every software development need.
For example, one medical device company used diagram- and model-centric tools for prototyping its projects, which was absolutely the best way to try out various actuator, pump, and sensor configurations. However, the company switched to traditional coding for the final product, partly because of regulatory certifications, but mainly because hospital network interfaces needed to be implemented that way.
Let’s call these tools system-diagram-based tools. They are at approximately the same stage of their lifecycle as 3D CAD was when it started to eliminate the need for separately crafted 2D mechanical drawings and parts lists. The same happened in electronics, as EDA systems moved hardware design workflows away from manual layout of circuit boards and chips.
In both cases, the transition required a long period of time: several years in electronics, several decades in mechanical CAD. The early stages of transition are difficult, because the change delivers great improvements, but only within a specific scope. Evangelists point to the improvements, but of course it’s not too hard to point to the limitations of scope.
But groups using code generation from system diagrams are getting results that everyone wants. Embedded system development managers report more reuse than before, especially when they have to move from one platform hardware architecture to another, and this is helping control cost.
These managers also feel (but find it hard to demonstrate) that the focus on diagrams is improving their ability to manage interfaces. And, some managers feel that a diagram-centric approach should also help developers see a function in context and therefore make it easier to build and use simulations to help evaluate alternatives.
So what is the justification for moving from a tried-and-tested approach to something new? The engineering teams involved are generally fighting cost-quality-timescale tradeoff battles on every project. They have a vision of future tools that will enable easy navigation and traceability of dependencies across requirements, solutions, and tests.
These tools will enable them to create, assess, and change every artefact in each of these categories, with automatic highlighting of omissions and, perhaps eventually, errors. But every step towards this vision must be a step forward. There is no time to explore dead ends.
Replacing the text representation of software with diagrams reduces errors by making the system-diagram-to-code step automatic. It should also clarify communication, because system diagrams are generally more widely understood than code, especially across hardware/software boundaries.
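The automatic diagram-to-code step itself is mechanical, which is exactly why it removes a class of transcription errors. As a hedged sketch, the same kind of block list used above can be translated into a C function by a few lines of generator code; the emitted style is an illustrative assumption, not any real tool's output.

```python
# Hypothetical block-diagram model: gain -> saturate.
diagram = [
    {"block": "gain",     "k": 2.5},
    {"block": "saturate", "lo": 0.0, "hi": 10.0},
]

def generate_c(diagram, fn_name="step"):
    """Mechanically emit a C step function from the block list."""
    lines = [f"float {fn_name}(float u) {{"]
    for blk in diagram:
        if blk["block"] == "gain":
            lines.append(f"    u = {blk['k']}f * u;")
        elif blk["block"] == "saturate":
            lines.append(f"    if (u < {blk['lo']}f) u = {blk['lo']}f;")
            lines.append(f"    if (u > {blk['hi']}f) u = {blk['hi']}f;")
    lines.append("    return u;")
    lines.append("}")
    return "\n".join(lines)

print(generate_c(diagram))
```

Because the generator applies the same translation rules every time, a change to the diagram propagates to the code without a developer retyping anything, and the diagram and code cannot drift apart.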
This should make it more possible to set, implement, and adhere to standards, improving reuse (of diagrams, and therefore code). And, of course, setting up a hybrid environment (some diagram-centric, some code-centric modules) is no great problem, and it can help mitigate the risk of change.
But productivity will be the key to every decision to change development methods. This may mean short-term productivity in the current project, or long-term productivity in the scope and scale of reuse that can be achieved, or both.
So, there is a clear message to vendors seeking to gain share as development teams consider this technology change: create and communicate case studies that show productivity gains. Leading system architects, designers, and developers need these case studies to initiate and justify investment in change.
No one needs to mourn the passing of source code. It will be with us for decades to come, but with a role that is increasingly behind the scenes and at the edges. And software engineers will discover they can draw as well as write.
1. “Comprehensive Experimental Analyses of Automotive Attack Surfaces,” available for download from http://www.autosec.org/publications.html
2. Atego Vantage, http://www.atego.com/products/sysim/
3. Dassault Systèmes ControlBuild, http://www.3ds.com/products-services/catia/capabilities/systems-engineering/embedded-systems/controlbuild/
4. dSPACE TargetLink, http://www.dspace.com/en/pub/home/products/sw/pcgs/targetli.cfm
5. Etas Ascet, http://www.etas.com/en/products/ascet_md_modeling_design.php
6. IBM Rational Rhapsody, http://www-03.ibm.com/software/products/us/en/ratirhap
7. Mathworks Simulink Coder, http://www.mathworks.co.uk/products/simulink-coder/
8. NI Labview, http://www.ni.com/labview/
9. PTC Thingworx, http://www.ptc.com/product/thingworx/
11. Sparx Systems Enterprise Architect, http://www.sparxsystems.com/support/faq/code_generation.html
12. Vissim, http://vissim.com/products/vissim/embedded.html
Peter Thorne is managing director for Cambashi. He is responsible for consulting projects related to the new product introduction process, e-business, and other industrial applications of information and communication technologies. He has applied information technology to engineering and manufacturing enterprises for more than 20 years, holding development, marketing, and management positions with both user and vendor organizations. Immediately prior to joining Cambashi in 1996, he headed the U.K. arm of a major IT vendor’s engineering systems business unit, which grew from a small R&D group to a multimillion-dollar profit center under his leadership. He holds a Master of Arts degree in natural sciences and computer science from Cambridge University. Also, he is a Chartered Engineer and a member of the British Computer Society. He can be reached at [email protected]