The "Keys" To Securing A System

Aug. 27, 2009
It's not enough to use firewalls or encrypt a couple of data files. Security is only as good as the weakest link.

The standalone system has turned into an endangered species, making security an even more pressing issue for developers. Cell phones, Wi-Fi-enabled digital picture frames, and just about everything else are now connected. And when it comes to security, it helps to start with the basics.

For starters, information can be exchanged as cleartext, authenticated text, or encrypted text (Fig. 1). Cleartext typically indicates that the information is text and readable, but the term is often used for any information that’s neither signed nor encrypted. Authenticated text is digitally signed. Changing the information voids the signature, so it’s possible to tell whether the information has changed. The information itself is still accessible, unlike encrypted information, which is indecipherable without decrypting it first.

Digital signatures essentially take the related information, often called a message, and compute a tag, also called a message digest or simply a digest, that’s attached to the information. If the information is changed after the signature is created, the tag will no longer match. Discovery of a change doesn’t usually provide information about who changed it, how, or what was changed.

Digital signatures can utilize encryption, but they often employ a hash function instead. The difference between a hash function and encryption is that hash functions are one-way operations, whereas encryption is usually a two-way process since the original cleartext can be reconstructed with the proper key and algorithm.
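
As a minimal illustration of that one-way property, this Python snippet (using the standard hashlib module) shows how even a one-character change to a message yields a completely different digest, which is what makes tampering detectable:

```python
import hashlib

message = b"Transfer $100 to account 12345"
tampered = b"Transfer $900 to account 12345"

# A hash is a one-way operation: easy to compute, infeasible to invert.
print(hashlib.sha256(message).hexdigest())
print(hashlib.sha256(tampered).hexdigest())
# The digests differ completely, so any change to a signed message
# invalidates a tag computed over the original.
```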

In general, hash functions are faster than encryption. They’re used in a range of application areas, from password storage to communication handshaking. For example, Linux stores user names and passwords in the “passwd” file. This is a cleartext file, but having access to it only provides the user name and the hashed password.

A user can be authenticated using this information by generating a new hash value from the supplied password and then comparing the result with the one in the passwd file. Of course, compromising the passwd file opens a security hole, which is why most Linux implementations actually keep the hashed passwords in the “shadow” file. The passwd file is a mirror image, minus the hashed passwords.
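
A hedged sketch of that comparison in Python: real systems use the crypt(3) family of schemes, but the standard library’s PBKDF2 function illustrates the same salted, one-way idea. The password and salt here are placeholders.

```python
import hashlib, hmac, os

# Store only a salt and the derived hash, never the password itself.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)

def authenticate(attempt: bytes) -> bool:
    # Re-derive a hash from the attempt and compare it with the stored one.
    candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 100_000)
    return hmac.compare_digest(candidate, stored)

print(authenticate(b"correct horse"))  # True
print(authenticate(b"wrong guess"))    # False
```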

Encryption normally uses one or two keys. A single key is used in a symmetrical encryption algorithm: the same key is employed for decryption as well as encryption. Symmetrical encryption is often faster than asymmetric, or two-key, systems. An asymmetric system uses one key for encryption and a related key for decryption. In this two-key case, one key can’t be derived from the other.
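
As a sketch of the single-key case, the following uses Fernet from the third-party cryptography package (an AES-based symmetric scheme); the same key both encrypts and decrypts:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # the single shared secret
cipher = Fernet(key)

token = cipher.encrypt(b"meet at dawn")  # encrypt with the key...
print(cipher.decrypt(token))             # ...and decrypt with the same key
```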

Most public-key systems employ two keys (one public, one private) with bidirectional data exchange, meaning the holder of either key can encrypt information that’s decryptable only by the other. A unidirectional system dedicates one key to encryption and the other to decryption. Either way, the key used to encrypt the data can’t also decrypt it. If both keys are kept secret, then the keys essentially identify the holders when information is exchanged.

The RSA public-key algorithm was presented in 1978 by Ron Rivest, Adi Shamir, and Leonard Adleman at MIT. It’s based on two large prime numbers and the fact that factoring a large number is very time-consuming, making brute-force attacks difficult. In a public-key environment, one of the keys is normally made available to interested parties. Likewise, each party normally has its own secret key (more on key exchange later).
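
A toy version of the math, with textbook-sized primes (Python 3.8+); real keys use primes hundreds of digits long, which is what makes factoring impractical:

```python
# Textbook RSA with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                # 3233, the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # 2753, the matching private exponent

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
plaintext = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(ciphertext, plaintext)       # 2790 65
```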

Some popular hash algorithms include MD4, MD5, SHA-1, and SHA-256. Common encryption systems include DES (Data Encryption Standard), RSA, and AES (Advanced Encryption Standard). The DES encryption key is only 56 bits long. That was considered secure when DES was released in the 1970s, but brute-force attacks are now within reach of the current crop of processors. Triple DES (3DES) reuses the DES algorithm but addresses this shortcoming: it uses three keys, and the data is run through DES three times.
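
Python’s hashlib exposes several of these hash algorithms directly (MD4 generally isn’t guaranteed to be available), making it easy to compare digest lengths:

```python
import hashlib

# Longer digests leave more room between possible collisions.
for name in ("md5", "sha1", "sha256"):
    digest = hashlib.new(name, b"the quick brown fox").hexdigest()
    print(f"{name:7s} {len(digest) * 4:3d} bits  {digest}")
```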

AES keys can be 128, 192, or 256 bits long. AES is standard fare on microcontrollers these days. It’s employed in wireless standards such as ZigBee and used for full-disk encryption and a host of other applications.

Another method, elliptic curve cryptography (ECC), can use a small key to provide security comparable to other techniques using larger keys. This efficient algorithm can be easily implemented in hardware. Security software often supports one or more encryption and hash algorithms. Likewise, many communication standards allow different algorithms and key sizes to be used. These are normally chosen during the initial handshake.
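
A brief sketch of the key-size difference using recent versions of the third-party cryptography package: a 256-bit elliptic-curve key is generally considered comparable in strength to a 3072-bit RSA key.

```python
from cryptography.hazmat.primitives.asymmetric import ec, rsa

ec_key = ec.generate_private_key(ec.SECP256R1())  # 256-bit curve
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
print(ec_key.curve.key_size, "vs.", rsa_key.key_size)  # 256 vs. 3072
```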

So much for the basics.

SECURE FROM THE START
Security builds from the ground up. If any level is compromised, then the levels above it are typically compromised as well. This is why security in depth is important. Likewise, partitioning can isolate problems, but only if the partitioning mechanism hasn’t been compromised. A compromise is often accomplished by finding a hole in the security mechanism. This is what happens with worms and viruses, which compromise systems by exploiting a defect in an operating system, application, or system configuration.

For most computer systems, physical security and the boot process are the starting point. One approach to securing a system starts with the Trusted Computing Group’s (TCG) Trusted Platform Module (TPM) to boot the system. A TPM, normally found in PCs, contains a secure microcontroller and storage (Fig. 2). Tamperproof hardware physically protects the device, and breaking it open results in the loss of the stored secure keys.

The TPM checks itself when a system starts and then facilitates the booting of the rest of the system. This can include processing a PIN entered by a user and authenticating a digitally signed or encrypted boot program normally stored on another device.
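
The measurement idea behind this can be sketched in a few lines. A TPM accumulates boot-stage hashes in platform configuration registers (PCRs) that can only be “extended,” never written directly; the snippet below mimics that chaining with SHA-256 and is not the actual TPM command interface:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style extend: new PCR = H(old PCR || H(component)).
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # PCRs start out zeroed at power-on
for stage in (b"firmware image", b"bootloader image", b"kernel image"):
    pcr = extend(pcr, stage)

print(pcr.hex())  # changing any stage changes the final value
```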

Typically, the TPM hands security over to the host, but it can be used for other security-related actions as well. The TPM also contains a unique RSA private key so that each TPM can be identified. And it lets the system digitally sign information, thereby allowing it to authenticate itself to other systems.


In addition, the TPM can be used for remote attestation, or identification of devices on the machine. This is accomplished by obtaining the digital characteristics of the hardware and software and then signing that information, which can then be sent to a third party. Remote attestation is typically employed to ensure, for example, that a particular version of a music-playing program is being used.

The TCG developed a slightly different approach called Direct Anonymous Attestation (DAA) in response to the lack of anonymity in basic remote attestation. DAA performs a similar process, but the result only verifies the state of the desired hardware or software. It doesn’t identify the TPM itself.

The TPM can also provide secure key storage as well as perform encryption and digital-signature chores. Keys needn’t be stored on the TPM itself, since encrypted versions can be saved in other system storage and handed back to the TPM, which decrypts the keys for use.

TPM operations can be incorporated into microcontrollers, not just PCs, which opens up a broader range of consumer devices. Many of the TPM features can be accessed using secure serial memories with I2C/SMBus interfaces. These memories often provide a subset of the functionality found in a TPM, but with lower power requirements and a simpler interface.

KEY MANIPULATION AND MANAGEMENT
Using one key or a pair of keys is just the starting point for encrypted data. More complex access to information is possible with a hierarchy of stored keys (Fig. 3). In this case, User X has Key 7 that can be used to decrypt a portion of the data (keys within keys) to obtain a pair of keys (Keys 4 and 5).

These, in turn, provide access to all of the data—one block and key at a time. The same could be accomplished using Key 6. Of course, additional data could be stored in encrypted blocks 4, 5, and 7. In this case, the holder of Key 6 wouldn’t have access to the entire document, just blocks 1, 2, and 3.
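
A minimal sketch of such a hierarchy, again using Fernet from the cryptography package: a master key “wraps” a data key, which in turn encrypts a data block, a two-level analog of the keys-within-keys arrangement in Figure 3.

```python
from cryptography.fernet import Fernet  # pip install cryptography

master = Fernet(Fernet.generate_key())  # held by the privileged user
data_key = Fernet.generate_key()

wrapped_key = master.encrypt(data_key)        # a key stored within a key
block = Fernet(data_key).encrypt(b"block 1")  # data encrypted under the data key

# The master-key holder first unwraps the data key, then the data.
inner = Fernet(master.decrypt(wrapped_key))
print(inner.decrypt(block))
```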

The scenario is a bit contrived, but it does highlight how owners of different keys can gain access to different parts of a document. It’s also more common than one might think. Take Adobe Acrobat documents as an example.

A document can contain fields with controlled access. A user provides one or more keys that are used to gain access to the proper fields, which are then presented to the user. The same general process can be used to sign documents or individual fields. The signing could apply to the field itself or include the field’s contents.

A mixed package isn’t always complex. In fact, simple configurations are quite common, such as the other example with User Y in Figure 3. In this case, Key B is used to obtain Key A. This often occurs when symmetrical and asymmetrical keys are mixed, because the former are often faster to use. Consequently, you wind up with a mix of the advantages gained from public-key and symmetrical-key systems.

Co-location of keys and information isn’t a requirement, although it typically happens when dealing with data such as Acrobat files, word-processing documents, or even signed or encrypted XML. Many communication protocols employ a public-key system for initial handshaking and then generate a symmetrical key to be used for subsequent communication. This key is randomly created and discarded when the communication link is closed.
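
A sketch of that hybrid handshake with the cryptography package: the slow asymmetric operation protects only a small, randomly generated session key, while the fast symmetric cipher carries the bulk traffic.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

receiver = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

session_key = Fernet.generate_key()                          # random, per-link
wrapped = receiver.public_key().encrypt(session_key, oaep)   # asymmetric step
bulk = Fernet(session_key).encrypt(b"application data")      # symmetric bulk step

recovered = receiver.decrypt(wrapped, oaep)  # receiver unwraps the session key
print(Fernet(recovered).decrypt(bulk))
```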

Keeping keys secret is a workable solution when both sides of the system are controlled. For instance, a remote-control garage door opener can be paired with the matching control unit. Of course, this becomes a problem if the control unit gets lost or destroyed, because a duplicate must be made.

The alternative with most encrypted garage door systems is to build a pairing system that requires physical access to both devices. That’s an easy chore for the owner of the garage. Essentially, the opener and the control unit exchange keys during this process and then use them for subsequent communication.

This approach works well for a small number of devices, but a different system is needed for managing a large number of keys. A public-key infrastructure (PKI) is often used in this case when dealing with a public-key system.

The PKI is based around certificate authorities (CAs), which normally are trusted organizations that can sign certificates. A system can employ a single CA or a hierarchy (Fig. 4). Usually, a CA will generate private/public-key pairs and provide them to a user, though it’s possible for a CA to sign certificates that are provided to it. It all depends on who can be trusted.

If public keys are to be provided to a CA, then they’re usually given to a registration authority (RA). The CA assumes that an RA has verified the source. The RA has its own certificate from the CA that it uses when sending information to the CA. Public CAs such as VeriSign provide keys for a price.

Companies like VeriSign are at the base of most PKI systems, but it’s possible to set up your own PKI root. The root signs its own certificate and can then create child CAs by issuing them signed certificates. The certificates typically include information in addition to the public key, such as an expiration date.


A user can trust a public key from a known and trusted CA, assuming it’s not on a revocation list and the certificate isn’t expired. Managing and scanning a revocation list can be time-consuming, which is why expiration dates are important.

A PKI simply provides a mechanism for knowing the lineage of a particular key. It’s possible for users to generate their own keys and provide the public key to another user. This can open a secure dialog, but it doesn’t guarantee that there’s communication between two known points.

For example, suppose User Y obtains a copy of Public Key 2 for CA 2, and User X, which has a certificate from CA 1, gives a copy of that certificate to User Y. Because User Y knows CA 1 through CA 2 and can verify User X via its certificate from CA 1, User Y can send information to User X knowing that only User X can decrypt it (assuming User X hasn’t been compromised).

On the other hand, User X can receive a message from User Y and respond securely to it, but User X doesn’t know whether User Y is really the source unless User X can obtain User Y’s public key from a known source, such as User Y directly. A man-in-the-middle attack can therefore work for part of an exchange, because User X knows nothing about User Y. Still, User Y can detect a bogus response from User X if it obtains User X’s certificate from CA 1. The source of the certificate doesn’t matter, since it’s signed and any changes can be detected.

Commercial Web sites use signed certificates. Likewise, legal digital signatures must be based on approved CAs. Many government and corporate entities mandate a particular CA for transactions. For other applications, a self-signed certificate is used. By default, most Web server installations have an option for generating a self-signed certificate for Secure Sockets Layer (SSL) communication. Most Web browsers warn about self-signed or expired certificates, though these warnings are usually ignored.
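
In Python, the standard ssl module performs exactly this chain-and-expiration check against the platform’s trusted CA store; a self-signed or expired certificate raises a verification error. The host name here is a placeholder, and the snippet needs network access.

```python
import socket, ssl

ctx = ssl.create_default_context()  # trusts the platform's CA store
try:
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.getpeercert()["notAfter"])  # certificate expiration
except ssl.SSLCertVerificationError as err:
    # Self-signed, expired, or otherwise untrusted certificates land here.
    print("verification failed:", err)
```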

SECURING THE NODE
A secure boot was the starting point, and secure communication can be achieved starting with keys authenticated through a CA. The next step is to move up the food chain toward applications.

For embedded microcontrollers, the step may be right into the application. But for most applications, at least an operating system sits in between. In the simplest case, there’s memory protection. A step up includes virtual-memory support. A secure operating system can, in theory, keep applications isolated with this memory protection support. Of course, operating-system architecture and size can provide a host of holes for attackers looking to bypass this security.

Another possible step in the food chain is virtual-machine support. It uses additional hardware to provide the same kind of protection that memory protection or virtual-memory support is designed to provide, but it’s more thorough, since it virtualizes all of the hardware.

A hypervisor is essentially an operating system that manages a virtual-machine environment. The approach is used to manage multiple systems, but can also provide additional security by isolating the operating systems and further isolating the applications. A “thin” hypervisor is usually much simpler than a conventional operating system or a real-time operating system (RTOS), making it less prone to bugs and significantly easier to verify when applying formal methods.

Some virtual-machine systems use a host operating system such as Linux, making this type of verification very difficult. Often, the alternative is to set up a virtual machine to handle the user interface for the virtual-machine system. The virtual machine communicates with the hypervisor to control the underlying system. This, in theory, provides better isolation.

MAKING POLICY
Hypervisors provide a system with multiple independent levels of security (MILS) or domain separation. MILS can be a component of a multilevel-security (MLS) system. However, MLS can be implemented without virtual-machine support, assuming the system can be secured, including a secure boot process.

Security-enhanced operating systems like SELinux use an MLS security model, which provides finer-grained access control than stock Linux. In general, Linux uses the Linux Security Modules (LSM) framework, which can support a variety of security models, including SELinux. Another is Smack (Simplified Mandatory Access Control Kernel).

The basic Linux security model provides authentication and access controls based on a user/group model, and it’s built around the file system. A file can be manipulated from a user, group, or other (anonymous) perspective, with read, write, and execute permissions individually controllable. Access controls are inherited based on the directory tree.
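
On a Unix-like system, the standard stat module renders those per-user, per-group, and per-other read/write/execute bits directly; a short sketch:

```python
import os
import stat

st = os.stat("/etc/passwd")
print(stat.filemode(st.st_mode))      # typically -rw-r--r--
print(oct(stat.S_IMODE(st.st_mode)))  # the same bits in octal, e.g. 0o644
```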

SELinux is more of a capability-oriented system with policies that can become quite complex. This mandatory access control system separates policy from enforcement. It not only controls access to files and directories, but also to network interfaces and messages.

Access can be associated with applications, not just a user. For example, a file server may only provide access to files or directories that are appropriately tagged for sharing. It tracks applications and any application spawned by one of these applications. Thus, policies can prevent the flow of information from a higher security level to a lower one. A higher-level application can’t simply provide access to a lower-level one without the proper permissions.

One problem with this level of sophistication is using it properly. Bad policies can open holes, allowing data or control to move where it should not be allowed. This is one reason why most systems start with one or more base policy definitions with incremental changes to address the system requirements.


SECURE SIGHT
A secure boot of a secure operating system is sufficient for some environments. But interaction with users brings the need for identification as well as user-related I/O. Secure displays are one feature that can be found in secure systems for the military (Fig. 5).

Secure aspects of the display system can be implemented in software or hardware. For example, a secure device driver can be set up so that only a secure application can display information on the bottom line of the display. Applications that go full-screen will not overwrite this secure area. Other approaches use display hardware to perform the same chore.

The ubiquitous Ctrl-Alt-Delete is a related interface item. In theory, the key combination should only invoke a system application. In Windows, it presents a dialog box that can start the Task Manager, among other things. With a secure display, the feedback could come through the secure display area, making it impossible for an application to simulate the response.
