Kaspersky Lab is a well-known security and anti-virus vendor bumping shoulders with the likes of Trend Micro, Symantec, McAfee, and Microsoft. These companies do not subscribe to “security by obscurity,” but they do hold on tightly to their source code. Not that this is at all surprising, given that most commercial entities with closed-source solutions do the same. These are essentially trade secrets, and letting them into the wild can have financial consequences.
And that’s why, when Kaspersky Lab recently offered to turn its source code over to the U.S. government, it was so newsworthy. But while unprecedented, there was method behind the madness. First, a few details: Kaspersky Lab is based in Russia. The U.S. government is not on great terms with the Russian government these days, and there is the possibility that Kaspersky Lab could be colluding with the latter. U.S. government intelligence agencies are worried about this and want to recommend that Kaspersky Lab’s software not be used because of cybersecurity considerations.
So, to prevent a potential loss of sales (in all likelihood, not just to the U.S. government), Kaspersky Lab has offered its source code up to the U.S. so a security audit could be performed. Unfortunately, simply evaluating the current software does not address the difficulty of verifying the many software instances Kaspersky Lab ships, the issue of almost daily updates, and so on. Security and anti-virus applications are one area where continuous updates are required. Any analysis would need to be done on a locked-down version, with incremental changes evaluated after the initial audit was complete, not to mention the ongoing checks that would be necessary. Going open-source is probably not an option.
I will let others track how this will play out. What I want to bring up is the issue of source code and how its distribution will be affected by similar security and safety issues, especially with the wave of IoT solutions. It is not uncommon for developers to require the source code of software they license, but this is often something that is archived, making it possible to recreate a system at a later date, in addition to being able to use it to track down new bugs.
The thing is, most developers will never care to look at this code, and most would not understand its operation, let alone the design considerations and requirements behind it. This is especially true for security and safety software, and even for communication software, where an extensive background in the area is needed to understand the ramifications of the code.
Turning over source code as part of a licensing agreement for its use is one thing, but turning it over to a customer or potential customer is another matter altogether. This is what Kaspersky Lab is doing to preserve its ability to sell its products.
The question is whether your company would follow suit. This is less of an issue with consumer products, as consumers, in general, have neither the background, the capability, nor the interest to delve into the hardware or software. They will typically depend upon a brand’s reputation to deliver a suitable product. Commercial entities buying products may be a little better off, but there is the cost/benefit discussion, even if the entity has the capability to perform an audit or evaluation.
Keep in mind that most End User License Agreements (EULAs) deliver software as-is. Support may include updates or bug fixes for a specified period, but source code, safety, and security tend to crop up only occasionally.
So who can you trust? That’s a difficult thing to answer, because there are often many involved in a software supply chain. Vendors are addressing this issue with techniques like signed software and hardware that has unique identifiers as well as secure manufacturing, since many devices are built and programmed by third parties.
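Artifact verification of the kind these vendors rely on can be sketched, in heavily simplified form, as follows. A real distribution pipeline signs the digest with an asymmetric key (GPG, platform code signing, and so on); this toy example, with a hypothetical payload and digest, only checks a bare SHA-256 hash:

```python
import hashlib

# Simplified stand-in for signed-software verification: compare a downloaded
# artifact against a digest published by the vendor. Real systems verify an
# asymmetric signature over the digest, not just the digest itself.

def verify_artifact(data: bytes, published_sha256: str) -> bool:
    """Return True if the artifact matches the published SHA-256 digest."""
    return hashlib.sha256(data).hexdigest() == published_sha256

# Hypothetical update payload and its vendor-published digest.
payload = b"firmware-update-1.0"
good_digest = hashlib.sha256(payload).hexdigest()

print(verify_artifact(payload, good_digest))   # genuine artifact passes
print(verify_artifact(b"evil", good_digest))   # tampered artifact fails
```

The weakness, of course, is the same trust question: the check is only as good as the channel that delivered the published digest or the key that signed it.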
Trusting no one means only using software you write yourself and using open-source tools that you have verified yourself. There may be a few people who can or will do that. Then there are the rest of us.
It’s a lot like the secure boot process. There is a path of trust that is rooted at some point—in this case, some security hardware. The challenge is that not even this path guarantees that a system cannot be compromised, because software farther down the chain may have been exploited.
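That chain of trust can be sketched as follows. This is a deliberately simplified model using bare SHA-256 digests where real secure boot uses asymmetric signatures; the images and fused value here are hypothetical:

```python
import hashlib

# Simplified secure-boot trust chain: each stage carries the expected digest
# of the next stage, rooted in a value fused into hardware at manufacture.
# Real implementations verify signatures with keys in a hardware root of
# trust rather than comparing raw hashes.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical boot images. The bootloader embeds the expected kernel digest
# (64 hex characters) at its end.
kernel = b"kernel image v1.2"
bootloader = b"bootloader code" + digest(kernel).encode()
rom_expected = digest(bootloader)  # value fused into hardware

def verify_chain(rom_value: str, bl: bytes, kern: bytes) -> bool:
    # Stage 0: hardware verifies the bootloader against the fused digest.
    if digest(bl) != rom_value:
        return False
    # Stage 1: bootloader verifies the kernel against its embedded digest.
    embedded = bl[-64:].decode()
    return digest(kern) == embedded

print(verify_chain(rom_expected, bootloader, kernel))       # intact chain
print(verify_chain(rom_expected, bootloader, b"tampered"))  # broken chain
```

Note what the sketch cannot show: even when every digest matches, a stage that is genuine but exploitable still compromises everything below it, which is exactly the point above.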
So back to the question: Would you turn over your source code?
If so, under what conditions? Keep in mind that potential attack surfaces increase as software gets distributed and viewed by others. The implications of spreading around source code may not always be apparent at first glance.
Just something to think about.