In terms of language popularity, much of the late 90s and early 2000s revolved around so-called "managed" languages, such as Java or C#. Currently, however, industry seems to be turning back more and more to native languages, and in particular two mainstream ones: C++ and Ada. Interestingly, both of these languages have been undergoing major revisions, resulting in the almost synchronized releases of C++11 and Ada 2012. This article will describe the most notable additions to the two languages and show how they relate to each other.
C++ and Ada have a long and intricate history. They emerged roughly at the same time in the early 80s and each was designed as a general-purpose language. Each was touted, at least among its most vocal advocates, as the language of its time, and each failed. In the late 90s and beginning of the 2000s, the language trend had migrated to the world of Java, or Java-like languages (such as C#). And once again, these languages failed to live up to their billing.
So-called managed languages are of course very appealing on paper. They're generally easier to learn, allow developers to avoid thinking about technical issues such as memory management, and provide lots of features allowing rapid prototyping and development. Hiding this complexity unfortunately has major drawbacks, as hiding is not the same as removing. Requiring automatic dynamic memory management ("garbage collection") is several orders of magnitude less efficient than letting developers use stack allocation and manual memory management, and allowing fast development with little architectural design can easily end with applications that are difficult to maintain over time.
One cycle seems to be closing, and industry is realizing that rapid development doesn't really matter when the end code doesn't fit the purpose it was developed for. Where safety is important, the past few years have seen a growing interest in Ada, with more and more people using this language to build highly reliable, secure and safe applications. This is particularly true for industries where safety and security are mandated, such as avionics, space, railroad or military. As a matter of fact, with the significant increase in the role of software, this club has grown recently and welcomed newcomers such as the medical devices and automotive industries, both of which are looking at adopting Ada or similar safety-centric languages.
From the efficiency point of view, organizations that used to adopt Java/C# as a standard have reconsidered their decision and are now returning to C++ when speed of execution is an issue. The example of Microsoft is particularly interesting in this regard. According to Herb Sutter (in his talk at this year's Lang.Next conference), while they were pretty much a one-language company with a policy of all new developments being done in C#, they have invested serious resources in the 2011 revision of the C++ language and are now letting projects decide what suits them best depending on their constraints.
It's no surprise then that, following this renewal of interest, both C++ and Ada have been going through major revisions - C++ with C++11 and Ada with Ada 2012 - though the near-simultaneous timing is amusing. It is no surprise either that these two revisions are promoting features for which they are well-known; i.e., safety-related features for Ada 2012 and efficiency-related features for C++11.
This does not mean that programs developed in Ada are by any means less efficient than those developed in C++. On the contrary, programs developed in Ada are more efficient than those developed in C++ for applications that play to Ada's strengths: i.e., large systems that need to be maintained over long periods of time and/or deployed on complex embedded targets. The fit between the application and the language is the key.
Pointer safety has always been one of the main sources of program vulnerabilities, and all modern languages have been looking at mitigation strategies. The first version of Ada was extremely conservative with regard to what could be done with pointers, and the pointer mechanism was substantially extended in the next revision. In comparison to C, C++ has somewhat reduced the need for pointers in a number of situations, thanks to references. But no matter how hard you try, no matter which language you use, there are still programs that require proper indirect access semantics.
C++11 offers a standard smart pointer mechanism that tracks how many references point to an object, deallocating the object when no references remain. For the sake of efficiency, this is not a garbage collector à la Java. Developers still have the burden of making sure that they understand the structure of the code, to resolve reference cycles for example. However, this is undoubtedly an improvement compared to the previous situation, allowing much cleaner code while maintaining comparable performance. Ada has done a very good job at mitigating the need for pointers and has fewer situations where pointers or even explicit references are required. However, if needed, smart pointers can be easily implemented, in particular through the Ada 2012 user-defined reference capability. Some libraries provided with compilers such as GNAT already provide these by default.
Other capabilities that Ada has provided in this area for a long time include accessibility checks and memory pools. The former allows the developer to avoid a number of dangling reference issues by making sure that a pointer cannot point to a piece of memory that has a shorter lifetime than itself. This is particularly useful when pointing to data located on the stack. The latter allow the developer to specify an area for allocating / deallocating storage (the "memory pool"), thus permitting the monitoring of its usage or the deallocation of an area as a whole. These two capabilities were available long before Ada 2012, but they have been greatly enhanced by the new version of the language.
So it's fair to say that the two languages are making progress on heap management issues from different angles. Some cross fertilization could probably be beneficial here. A word to the wise...
There is a major paradigm difference between the way Ada and C++ specify parameter passing conventions. In C++ a programmer specifies how a parameter is passed (as a copy, as a reference or as a pointer). Ada takes care of this automatically but requires identifying how the parameter is used (as an input, as a result, or as both an input and a result). With C++11, the language goes one step further and introduces a new reference-passing mode, the rvalue reference. In short, this states that the value from which the reference is built is temporary, and that it will not exist beyond the lifetime of a given call. This feature allows great optimization of patterns such as copy constructors and assignment operators, which are very common in C++ and probably one of the biggest performance bottlenecks in the standard library.
On the other hand, Ada 2012 now allows extension of the parameter mode with the "aliased" keyword, forcing a parameter to be passed by reference while previously it was the compiler that decided the most efficient mode. Has C++ been inspired by Ada when introducing a usage-related mode and Ada by C++ when introducing an implementation-related mode?
What is striking to an Ada developer is that the C++ notion of safety concerns heap allocation safety only. With this view, Java and its garbage collector represent the pinnacle of software safety, and using smart pointers (as implemented in C++11) is one way to move in that direction. Safety-critical Ada programs often ban pointers from the start, or extensively constrain their usage, and consider safety as the ability to write verifiable contracts on language entities.
In previous versions of the language, strong typing, parameter modes, encapsulation, generic specifications (generics are akin to C++ templates), and value ranges already provided a comprehensive set of static and dynamic contracts that developers can define for their software. Ada 2012 brings this to a whole new level by adding the notions of preconditions, postconditions, invariants, and predicates. These contracts can formally define the requirements that must be fulfilled before a subprogram call as well as the expectations that may be assumed after the call. They also allow the developer to specify constraints on a type that have to be fulfilled throughout the entire application. Technically, contracts take the form of Boolean expressions checked at well-defined places in the application.
The benefits of these features are multiple. To start with, having the ability to specify contracts using the language's standard syntax, and to check that they compile properly, ensures that they stay consistent with the program itself. Contracts that are written in the form of comments tend to degrade over time as the code evolves, since keeping them up-to-date during code changes is rather tedious. Although one would probably disable contract checking after deployment, to avoid additional exception cases and degraded performance, the checks can be activated during testing, ensuring that the contracts are actually enforced. They can also be used as an additional data source by static analysis tools, to provide more accurate results and to check more properties. Going one step further, with the proper set of constraints, they can even be formally proven to be correct, giving additional confidence that the program will meet its safety and/or security requirements.
These capabilities, which are rather unique in the area of industrial-ready programming languages, have attracted considerable interest in the high-integrity application community. They will undoubtedly be less popular for those who are looking for rapid prototyping and who can accept a higher rate of defects. Different communities, different needs, different answers.
This different point of view regarding software safety is exemplified by the C++11 introduction of static type inference. When the auto keyword is used in place of a type in a variable declaration, the variable's type is automatically deduced from the initialization expression. This solves a real problem in the C++ way of developing applications, where types can be very long to write and to understand when they involve namespaces and generics. Needless to say, for the Ada developer, a type is a contract and it's a heresy to even consider letting the compiler decide it for you. Again, no black and white here, different domains leading to different features...
The absence of custom iteration, and of the famous "for each" loop, has definitely been a source of frustration for a lot of developers. Whoever has tasted iterators in other languages will have a hard time going back to the awkward pattern of manually declaring an iterator, writing a while loop with the correct condition, extracting the element from the iterator, and remembering to put the "next" call at the end of the loop. Combined with the "for each" kind of loop, this feature sounds like a no-brainer. After its introduction in Java 1.5, it is finally part of the C++ and Ada feature set.
In short, this allows the developer to write a loop over the contents of any kind of container (array, list, map, set ...) using a "for" loop abstracting away the actual container representation, serving the next element at each iteration. Readability is greatly enhanced, and in the case of Ada, formal proof becomes much easier to perform on such code. For example, proving that the loop terminates becomes trivial.
Nice to see that these two players have drawn the same conclusions at the same time!
Language support for concurrency has been around since the very first version of Ada through tasking (a task typically maps to a thread). It was then enhanced in Ada 95 with protected objects (which provide advanced data protection against concurrent access). Ravenscar - a deterministic, deadlock-safe subset of the Ada tasking features used since the late 90s - was standardized in Ada 2005 for single-core systems. It can now be extended to multicores thanks to the Ada 2012 ability to select the core on which a given task is running. It's nice to see C++11 standardizing concurrency management, namely threads and mutexes. There's no doubt that this will improve the portability of such code, whereas previously a concurrent program had to rely either on operating system primitives or vendor extensions. The level of the services provided by the new language is more or less comparable to what is available in Java. It allows manipulation of low-level notions such as threads or locks, abstracting them away from the actual system API. The introduction of lambda expressions in C++11 provides a more function-oriented interface than Java, which requires the use of classes.
As of today, C++ thread capabilities can be viewed as a low-level subset of Ada's tasking facilities. It still lacks Ada's higher-level abstractions for processing synchronization (rendezvous), advanced data structure protection (protected objects) and safe-by-construction multi-tasking (Ravenscar). No doubt, however, that the enhancements provided in the C++11 version are already a huge improvement over what was available previously.
The most successful software projects are those that have been able to select the languages that were the most appropriate to their needs. Looking at the big picture, seeing Ada and C++ re-emerging at the same time, with new revisions targeting what each was designed for, is definitely a great sign of market maturity.
One could argue that most of the developers who come out of university are Java-trained, and might find it difficult to shift to another formalism. Training will be a requirement, but training is always a requirement, especially when high performance levels and/or safety are required. Learning another language is the easy part - and having the right language from the start will help newcomers become more productive more quickly. Not to mention the benefits in terms of software quality. As a final word, more than ever, with regard to today's software engineering challenges, one size doesn't fit all.