Chapter 7: The Role of Cryptography in Information Security

After its human resources, information is an organization's most important asset. As we have seen in previous chapters, security and risk management is data-centric. All efforts to protect systems and networks attempt to achieve three outcomes: data availability, integrity, and confidentiality. And as we have also seen, no infrastructure security controls are 100% effective. In a layered security model, it is often necessary to implement one final prevention control wrapped around sensitive information: encryption.

Encryption is not a security panacea. It will not solve all your data-centric security issues. Rather, it is simply one control among many. In this chapter, we look at encryption's history, its challenges, and its role in security architecture.

Cryptography is a science that applies complex mathematics and logic to design strong encryption methods. Achieving strong encryption, the hiding of data's meaning, also requires intuitive leaps that allow creative application of known or new methods. So cryptography is also an art.

The driving force behind hiding the meaning of information was war. Sun Tzu wrote,

Of all those in the army close to the commander none is more intimate than the secret agent; of all rewards none more liberal than those given to secret agents; of all matters none is more confidential than those relating to secret operations.

Secret agents, field commanders, and other human elements of war required information. Keeping the information they shared from the enemy helped ensure advantages of maneuver, timing, and surprise. The only sure way to keep information secret was to hide its meaning.

Early cryptographers used three methods to encrypt information: substitution, transposition, and codes.

One of the earliest encryption methods is the shift cipher. A cipher is a method, or algorithm, that converts plaintext to ciphertext. Caesar's shift cipher is known as a monoalphabetic substitution shift cipher. See Figure 7-1.

Figure 7-1: Monoalphabetic Substitution Shift Cipher

The name of this cipher is intimidating, but it is simple to understand. Monoalphabetic means it uses one cipher alphabet. Each character in the cipher alphabet (traditionally depicted in uppercase) is substituted for one character in the plaintext message. Plaintext is traditionally written in lowercase. It is a shift cipher because we shift the start of the cipher alphabet some number of letters (four in our example) into the plaintext alphabet. This type of cipher is simple to use and simple to break.

In Figure 7-1, we begin by writing our plaintext message without spaces. Including spaces is allowed, but doing so helps with cryptanalysis (cipher breaking), as shown later. We then substitute each character of the plaintext with its corresponding character in the cipher alphabet. Our ciphertext is highlighted at the bottom.
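To make the mechanics concrete, here is a minimal Python sketch of a shift cipher (an illustration, not from the chapter). It follows the conventions above of lowercase plaintext and uppercase ciphertext, and applies the example shift of four to the chapter's later sample message:

```python
def shift_encrypt(plaintext: str, shift: int) -> str:
    """Encrypt lowercase plaintext with a Caesar-style shift cipher.

    Each letter is replaced by the letter `shift` positions later in the
    alphabet (wrapping around), written in uppercase by convention.
    """
    out = []
    for ch in plaintext:
        index = (ord(ch) - ord('a') + shift) % 26
        out.append(chr(index + ord('A')))
    return ''.join(out)

def shift_decrypt(ciphertext: str, shift: int) -> str:
    """Reverse the shift to recover the lowercase plaintext."""
    out = []
    for ch in ciphertext:
        index = (ord(ch) - ord('A') - shift) % 26
        out.append(chr(index + ord('a')))
    return ''.join(out)

print(shift_encrypt("geteachsoldierameal", 4))  # KIXIEGLWSPHMIVEQIEP
```

Note how every plaintext e becomes the same ciphertext I; that repetition is exactly the weakness discussed next.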

Looking at the ciphertext, one of the problems with monoalphabetic ciphers is apparent: patterns. Note the repetition of O and X. Each letter in a language has specific behavior, or socialization, characteristics. One of these characteristics is whether it commonly appears as a doubled consonant or vowel. According to Mayzner and Tresselt (1965), the following is a list of the common doubled letters in English.

LL EE SS OO TT FF RR NN PP CC

In addition to doubling, certain letter pairs commonly appear in English text:

TH HE AN RE ER IN ON AT ND ST ES EN OF TE ED OR TI HI AS TO

Finally, each letter appears in moderate to long text with relative frequency. According to Zim (1962), the following letters appear with diminishing frequency. For example, e is the most common letter in English text, followed by t, etc.

ETAON RISHD LFCMU GYPWB VKXJQ Z

Use of letter frequencies to break monoalphabetic ciphers was first documented by Abu Yusuf Ya'qub ibn Is-haq ibn as-Sabbah ibn 'omran ibn Ismail al-Kindi in the ninth century CE (Singh, 1999). al-Kindi did what cryptanalysts (people who try to break the work of cryptographers) had been trying to do for centuries: develop an easy way to break monoalphabetic substitution ciphers. Once the secret spread, simple substitution ciphers were no longer safe. The basic steps are to count how often each character appears in the ciphertext, rank the characters by frequency, and tentatively match that ranking against the known frequency order of letters in the plaintext language.

Eventually, this frequency analysis begins to reveal patterns and possible words. Remember that the letters occur with relative frequency, so this is not perfect. Letter frequency, for example, differs between writers and subjects. Consequently, using a general letter frequency chart produces varying results depending on writing style and content. However, by combining letter socialization characteristics with frequency analysis, we can work through these inconsistencies and arrive at the hidden plaintext.
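The counting stage of this analysis can be sketched in a few lines of Python (an illustration, not from the chapter). The mapping it produces is only a first guess, which the cryptanalyst then refines using the doubling and pairing characteristics listed above:

```python
from collections import Counter

# Relative frequency order of English letters (Zim, 1962), as given above.
ENGLISH_ORDER = "ETAONRISHDLFCMUGYPWBVKXJQZ"

def frequency_guess(ciphertext: str) -> dict:
    """Map each ciphertext letter to a first-pass plaintext guess.

    The most common ciphertext letter is guessed to be 'e', the next
    most common 't', and so on down the frequency order.
    """
    ranked = [letter for letter, _ in Counter(ciphertext).most_common()]
    return {c: ENGLISH_ORDER[i].lower() for i, c in enumerate(ranked)}
```

Running this over a long monoalphabetic ciphertext typically recovers the most common letters quickly; rarer letters require the pattern-based refinement described above.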

Summarizing, monoalphabetic substitution ciphers are susceptible to frequency and pattern analysis. This is one of the key takeaways from this chapter: a bad cipher tries to hide plaintext by creating ciphertext containing recognizable patterns or regularly repeating character combinations.

Once al-Kindi broke monoalphabetic ciphers, cryptographers went to work trying to find a stronger cipher. Finally, in the 16th century, a French diplomat developed a cipher that would stand for many decades (Singh, 1999). Combining the work and ideas of Johannes Trithemius, Giovanni Porta, and Leon Battista Alberti, Blaise de Vigenère created the Vigenère cipher.

Vigenère's cipher is based on a Vigenère table, as shown in Figure 7-2. The table consists of 27 rows. The first row of lowercase letters represents the plaintext characters. Each subsequent row represents a cipher alphabet. For each alphabet, the first character is shifted one position farther than the previous row. In the first column, each row is labeled with a letter of the alphabet. In some tables, the letters are replaced with numbers representing the corresponding letter's position in the standard alphabet. For example, A is replaced with 1, C with 3, etc.

Figure 7-2: Vigenère Table

A key is required to begin the cipher process. For our example, the key is FRINGE. The message we wish to encrypt is "get each soldier a meal."

Write the key above the message so that each letter of the key corresponds to one letter in the message, as shown below. Repeat the key as many times as necessary to cover the entire message.

FRINGEFRINGEFRINGEF
geteachsoldierameal
MWCSHHNKXZKNKJJALFR

Figure 7-3: Selection of Table Rows Based on Key

Our encrypted message used six cipher alphabets based on our key. Anyone with the key and the layout of the table can decrypt the message. However, messages encrypted using the Vigenère cipher are not vulnerable to simple frequency analysis. Our message, for example, contains four e's, yet in the ciphertext below each instance of e is represented by a different cipher character (W, S, K, and L). It is not possible to determine the relative frequency of any single letter. However, the cipher is still vulnerable to attack.

MWCSHHNKXZKNKJJALFR
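The encryption above can be reproduced in a short Python sketch (an illustration, not from the chapter). It assumes the table convention described earlier, in which row A is already shifted one position, so each key letter's shift equals its position in the alphabet (A=1 ... Z=26):

```python
def vigenere_encrypt(plaintext: str, key: str) -> str:
    """Vigenère encryption per the Figure 7-2 table convention.

    The shift for each key letter equals its alphabet position
    (A=1 ... Z=26); the key repeats to cover the whole message.
    """
    out = []
    for i, ch in enumerate(plaintext):
        shift = ord(key[i % len(key)]) - ord('A') + 1
        out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('A')))
    return ''.join(out)

print(vigenere_encrypt("geteachsoldierameal", "FRINGE"))  # MWCSHHNKXZKNKJJALFR
```

Decryption simply subtracts the same per-letter shifts, which is why anyone holding the key and the table layout can recover the message.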

Although slow to gain acceptance, the Vigenère cipher was a very strong and seemingly unbreakable encryption method until the 19th century. Charles Babbage and Friedrich Wilhelm Kasiski demonstrated in the mid- and late 1800s, respectively, that even polyalphabetic ciphers provide trails for cryptanalysts. Although frequency analysis did not work, encrypted messages contained patterns that matched plaintext language behaviors. Once again, a strong cipher fell because it could not distance itself from the characteristics of the plaintext language.

Other attempts to hide the meaning of messages included rearranging letters to obfuscate the plaintext: transposition. The rail fence transposition is a simple example of this technique. See Figure 7-4. The plaintext, giveeachsoldierameal, is written with every other letter on a second line. To create the ciphertext, the letters on the first line are written first and then the letters on the second. The resulting ciphertext is GVECSLIRMAIEAHODEAEL.

Figure 7-4: Rail Fence Transposition

The ciphertext retains much of the characteristic spelling and letter socialization of the plaintext and its corresponding language. Using more rows helped, but complexity increased beyond that which was reasonable and appropriate.

In addition to transposition ciphers, codes were also common prior to the use of contemporary cryptography. A code replaces a word or phrase with a character. Figure 7-5 is a sample code. Using codes like our example was a good way to obfuscate meaning if the messages were small and the codebooks were safe. However, using a codebook to allow safe communication of long or complex messages between multiple locations was difficult.

Figure 7-5: Code Table

The first challenge was creating the codes for appropriate words and phrases. Codebooks had to be large, and the effort to create them was significant: like writing an English/French dictionary. After distribution, there was the chance of codebook capture, loss, or theft. Once compromised, the codebook was no longer useful, and a new one had to be created. Finally, coding and decoding lengthy messages took time, time not available in many situations in which they were used.

Codes were also broken because of characteristics inherent in the plaintext language. For example, "and," "the," "I," "a," and other frequently occurring words or letters could eventually be identified. This provided the cryptanalysts with a finger hold from which to begin breaking a code.

To minimize the effort involved in creating and toting codebooks, cryptographers in the 16th century often relied on nomenclators. A nomenclator combines a substitution cipher with a small code set, as in the famous one shown in Figure 7-6. Mary Queen of Scots and her cohorts used this nomenclator during a plot against Queen Elizabeth I (Singh, 1999). Thomas Phelippes (cipher secretary to Sir Francis Walsingham, principal secretary to Elizabeth I) used frequency analysis to break it. Phelippes' success cost Queen Mary her royal head.

Figure 7-6: Nomenclator of Mary Queen of Scots (Singh, 1999, loc. 828)

Between the breaking of the Vigenre cipher and the 1970s, many nations and their militaries attempted to find the unbreakable cipher. Even Enigma fell to the technology-supported insights of Marian Rejewski and Alan Turing. (If you are interested in a good history of cryptography, including transposition ciphers and codes, see The Code Book by Simon Singh.)

Based on what we learn from the history of cryptography, a good cipher

makes it impossible to find the plaintext m from ciphertext c without knowing the key. Actually, a good encryption function should provide even more privacy than that. An attacker shouldn't be able to learn any information about m, except possibly its length at the time it was sent (Ferguson, Schneier, & Kohno, 2010, p. 24).

Achieving this ideal requires that any change to the plaintext, no matter how small, must produce a drastic change in the ciphertext, such that no relationship between the plaintext and the resulting ciphertext is evident. The change must start at the beginning of the encryption process and diffuse throughout all intermediate permutations until reaching the final ciphertext. Attempting to do this before the late 20th century, and maintain some level of business productivity, was not reasonable. Powerful electronic computers were the stuff of science fiction. Today, we live in a different world.

The standard cipher in use today is the Advanced Encryption Standard (AES). It is a symmetric block cipher, meaning that it uses a single key for encryption and decryption, and it ostensibly meets our definition of an ideal cipher. However, cryptanalysts have already broken it on paper; we need better computers to exploit the discovered weaknesses. It will be some time before private industries have to worry about changing their encryption processes.

A block cipher mode features the use of a symmetric key block cipher algorithm (NIST, 2010). Figure 7-7 depicts a simple block cipher. The plaintext is broken into blocks. In today's ciphers, the block size is typically 128 bits. Using a key, each block passes through the block algorithm, resulting in the final ciphertext. One of the problems with this approach is lack of diffusion. The same plaintext with the same key produces the same ciphertext. Further, a change in the plaintext results in a corresponding and identifiable change in the ciphertext.

Figure 7-7: Simple Block Cipher (Electronic codebook, 2012)
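The lack of diffusion is easy to demonstrate. The sketch below (an illustration, not from the chapter) uses a deliberately trivial stand-in for the block algorithm, a simple XOR with the key, which is not a real cipher; the point is only that enciphering each block independently lets identical plaintext blocks leak as identical ciphertext blocks:

```python
def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    """Stand-in for a real block algorithm (NOT secure): XOR with the key."""
    return bytes(b ^ k for b, k in zip(block, key))

def simple_block_mode(plaintext: bytes, key: bytes, block_size: int = 8) -> bytes:
    """Encipher each block independently, as in Figure 7-7."""
    blocks = [plaintext[i:i + block_size]
              for i in range(0, len(plaintext), block_size)]
    return b''.join(toy_block_encrypt(blk, key) for blk in blocks)

ct = simple_block_mode(b'SAMEDATASAMEDATA', b'8bytekey')
# The two identical plaintext blocks yield identical ciphertext blocks:
print(ct[:8] == ct[8:16])  # True
```

The same pattern appears with any block algorithm run this way, no matter how strong; the weakness is the mode, not the cipher.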

Because of the weaknesses in simple block algorithms, cryptographers add steps to strong ciphers. Cipher block chaining (CBC), for example, adds diffusion by using ciphertext, an initialization vector, and a key. Figure 7-8 graphically depicts the encipher process (⊕ = XOR). The initialization vector (IV) is a randomly generated and continuously changing set of bits the same size as the plaintext block. The resulting ciphertext changes as the IV changes. Since the key/IV pair should never be duplicated, the same plaintext can theoretically pass through the cipher algorithm using the same key and never produce the same ciphertext.

Figure 7-8: Cipher-block Chaining Cipher Mode (Cipher-block chaining, 2012)

When the CBC cipher begins, it XORs the plaintext block with the IV and submits it to the block algorithm. The algorithm produces a block of ciphertext. The ciphertext from the first block is XORed with the next block of plaintext and submitted to the block algorithm using the same key. If the final block of plaintext is smaller than the cipher block size, the plaintext block is padded with an appropriate number of bits. This is stronger, but it still fell prey to skilled cryptanalysts.
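The chaining steps just described can be sketched as follows (an illustration, not from the chapter). Again a trivial XOR stands in for the real block algorithm, and zero-byte padding stands in for a proper padding scheme; the point is that XORing each plaintext block with the previous ciphertext block (or the IV, for the first block) diffuses changes forward:

```python
import os

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    """Stand-in for the block algorithm (NOT a real cipher): XOR with the key."""
    return bytes(b ^ k for b, k in zip(block, key))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes,
                block_size: int = 8) -> bytes:
    # Pad the final block with zero bytes if it is smaller than the block size.
    if len(plaintext) % block_size:
        plaintext += b'\x00' * (block_size - len(plaintext) % block_size)
    previous = iv
    out = []
    for i in range(0, len(plaintext), block_size):
        mixed = xor(plaintext[i:i + block_size], previous)  # chain with prior block
        previous = toy_block_encrypt(mixed, key)
        out.append(previous)
    return b''.join(out)

key = b'8bytekey'
# Same plaintext, same key, two random IVs: different ciphertext each time.
ct1 = cbc_encrypt(b'SAMEDATASAMEDATA', key, os.urandom(8))
ct2 = cbc_encrypt(b'SAMEDATASAMEDATA', key, os.urandom(8))
print(ct1 != ct2)  # True: the differing IVs diffuse through every block
```

Note also that the two identical plaintext blocks now produce different ciphertext blocks within a single message, unlike the simple block mode.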

AES, another block cipher mode, uses a more sophisticated approach, including byte substitution, shifts, column mixing, and use of cipher-generated keys for internal processing (NIST, 2001). It is highly resistant to any attack other than key discovery attempts. However, cryptanalysts have theoretically broken AES (Ferguson, Schneier, & Kohno, 2010). This does not mean it is broken in practice; it is still the recommended encryption method for strong data protection.

For additional information on attacks against modern ciphers, see Cryptography Engineering: Design Principles and Practical Applications by Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno.

The processes underlying all widely accepted ciphers are, and should be, known, allowing extensive testing by all interested parties: not just the originating cryptographer. We tend to test our expectations of how our software creations should work instead of looking for ways they deviate from expected behavior. Our peers do not usually approach our work in that way. Consequently, allowing a large number of people to try to break an encryption algorithm is always a good idea. Secret, proprietary ciphers are suspect. A good encryption solution follows Auguste Kerckhoffs's principle:

The security of the encryption scheme must depend only on the secrecy of the key and not on the secrecy of the algorithm (Ferguson, Schneier, & Kohno, 2010, p. 24).

If a vendor, or one of your peers, informs you he or she has come up with a proprietary, secret cipher that is unbreakable, that person is either the foremost cryptographer of all time or deluded. In either case, only the relentless pounding on the cipher by cryptanalysts can determine its actual strength.

Now that we have established the key as the secret component of any well-tested cipher, how do we keep our keys safe from loss or theft? If we lose a key, the data it protects is effectively lost to us. If a key is stolen, the encrypted data is at higher risk of discovery. And how do we share information with other organizations or individuals if they do not have our key?

AES is a symmetric cipher; it uses the same key for both encryption and decryption. So, if I want to send AES-encrypted information to a business partner, how do I safely send the key to the receiver?

Managing keys requires three considerations: where keys are stored, who can access them, and how strong they are.

Many organizations store key files on the same system, and often the same drive, as the encrypted database or files. Even if the key itself is encrypted, this is bad security. What happens if the system fails and the key is not recoverable? Having usable backups helps, but backup restores do not always work as planned.

Regardless of where you keep your key, encrypt it. Of course, now you have to decide where to store the encryption key for the encrypted encryption key. None of this confusion is necessary if you store all keys in a secure, central location. Further, do not rely solely on backups. Consider storing keys in escrow, allowing access by a limited number of employees (key escrow, n.d.). Escrow storage can be a safe deposit box, a trusted third party, etc. Under no circumstances allow any one employee to privately encrypt your keys.

Encrypted keys protecting encrypted production data cannot be locked away and only brought out by trusted employees as needed. Rather, keep the keys available but safe. Key access security is, at its most basic level, a function of the strength of your authentication methods. Regardless of how well protected your keys are when not used, authenticated users (including applications) must gain access. Ensure identity verification is strong and aggressively enforce separation of duties, least privilege, and need-to-know.

Most, if not all, attacks against your encryption will try to acquire one or more of your keys. Use of weak keys or untested/questionable ciphers might achieve compliance, but it provides your organization, its customers, and its investors with a false sense of security. As Ferguson, Schneier, and Kohno (2010) wrote,

In situations like this (which are all too common) any voodoo that the customer [or management] believes in would provide the same feeling of security and work just as well (p. 12).

So what is considered a strong key for a cipher like AES? AES can use 128-, 192-, or 256-bit keys. Keys of 128 bits are strong enough for most business data, provided you make them as random as possible. Key strength is measured by key size and an attacker's ability to step through possible combinations until the right key is found. However you choose your keys, ensure you get as close as possible to a key selection process in which all bit combinations are equally likely to appear in the key space (the set of all possible keys).
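In Python, for example, a cryptographically secure random source such as the standard library's secrets module produces keys in which every bit pattern in the key space is equally likely (a sketch, not the only valid approach):

```python
import secrets

# Generate a 128-bit key from the operating system's secure random source.
# Never derive keys from predictable values such as timestamps or names.
key = secrets.token_bytes(16)   # 16 bytes = 128 bits
print(len(key) * 8)             # 128
```

The random module, by contrast, is designed for simulation, not secrecy, and must never be used for key material.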

It is obvious from the sections on keys and algorithms that secrecy of the key is critical to the success of any encryption solution. However, it is often necessary to share encrypted information with outside organizations or individuals. For them to decrypt the ciphertext, they need our key.

Transferring a symmetric cipher key is problematic. We have to make sure all recipients have the key and properly secure it. Further, if the key is compromised in some way, it must be quickly retired from use by anyone who has it. Finally, distribution of the key must be secure. Luckily, some very smart cryptographers came up with the answer.

In 1978, Ron Rivest, Adi Shamir, and Leonard Adleman (RSA) publicly described a method of using two keys to protect and share data; one key is public and the other private. The organization or person to whom the public key belongs distributes it freely. However, the private key is kept safe and is never shared. This enables a process known as asymmetric encryption and decryption.

As shown in Figure 7-9, the sender uses the recipients public key to convert plaintext to ciphertext. The ciphertext is sent and the recipient uses her private key to recover the plaintext. Only the person with the private key corresponding to the public key can decrypt the message, document, etc. This works because the two keys, although separate, are mathematically entwined.

Figure 7-9: Asymmetric Cryptography (Microsoft, 2005)

At a very high level, the RSA model uses prime numbers to create a public/private key set: select two very large prime numbers, multiply them to produce a modulus that appears in both keys, and then derive the public and private exponents from the primes. Only someone who knows the original primes can efficiently compute the private exponent.

There is more to asymmetric key creation, but this is close enough for our purposes.
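A toy example in Python may help. The primes below are absurdly small (real keys use primes hundreds of digits long) and the derivation is simplified, but the encrypt/decrypt relationship is the real one. The three-argument pow performs modular exponentiation, and pow(e, -1, phi) (Python 3.8+) computes the modular inverse:

```python
# Toy RSA key generation with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                    # modulus, part of both keys: 3233
phi = (p - 1) * (q - 1)      # 3120, computable only from p and q
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: 2753

m = 65                       # a message, encoded as a number < n
c = pow(m, e, n)             # encrypt with the public key (n, e)
print(c)                     # 2790
print(pow(c, d, n))          # 65 -- decrypt with the private key (n, d)
```

With primes this small, an attacker factors n instantly and recovers d; with primes of realistic size, that factoring step is what makes brute force impractical.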

When someone uses the public key (the product of the two primes) to encrypt a message, only the recipient, who knows the two prime numbers behind the key pair, can decrypt it. If the primes are small, a brute-force attack can find them. However, the use of extremely large primes against today's computing power makes finding the private key through brute force unlikely. Consequently, we can use asymmetric keys to share symmetric keys, encrypt email, and support various other processes where key sharing is necessary.

The Diffie-Hellman key exchange method is similar to the RSA model, and it was made public first. However, it allows two parties who know nothing about each other to establish a shared key. This is the basis of SSL and TLS security. An encrypted session key exchange occurs over an open connection. Once both parties to the session have the session key (also known as a shared secret), they establish a virtual and secure tunnel using symmetric encryption.
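The exchange can be sketched with toy numbers (an illustration with absurdly small values; real deployments use very large primes). Each side publishes one value computed from its secret, and both arrive at the same shared secret without ever sending it:

```python
# Toy Diffie-Hellman exchange -- for illustration only.
p, g = 23, 5                 # public values: a prime modulus and a generator

a = 6                        # Alice's secret, never transmitted
b = 15                       # Bob's secret, never transmitted

A = pow(g, a, p)             # Alice sends A over the open connection
B = pow(g, b, p)             # Bob sends B over the open connection

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)  # True: both hold the same shared secret
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them (the discrete logarithm problem) is impractical at realistic sizes.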

So why not throw out symmetric encryption and use only asymmetric ciphers? First, symmetric ciphers are typically much stronger. Further, asymmetric encryption is far slower. So we have settled on symmetric ciphers for data center and other mass storage encryption and asymmetric ciphers for just about everything else. And it works for now.

Although not really encryption as we apply the term in this chapter, asymmetric keys have another application: digital signatures. If Bob, for example, wants to enable verification that he actually sent a message, he can sign it.

Refer to Figure 7-10. The signature process uses Bob's private key, since he is the only person who has it. First, the message text is processed through a hash function. A hash is a fixed-length value that represents the message content. If the content changes, the hash value changes. Further, an attacker cannot use the hash value to arrive at the plaintext. Bob then encrypts the hash value with his private key; the encrypted hash is the signature.

Figure 7-10: Digital Signing (Digital signature, 2012)

When Alice receives Bob's message, she can verify the message came from Bob and is unchanged, provided she has Bob's public key. With Bob's public key, she decrypts the signature to recover the hash value Bob computed, and then rehashes the message text herself. If the two hash values are the same, the signature is valid, and the data reached Alice unchanged.
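The hash properties this verification relies on are easy to demonstrate (a sketch, not from the chapter). SHA-256 from Python's standard library is used here as a modern stand-in; the certificate example later in this chapter mentions SHA1 and MD5 formats:

```python
import hashlib

def digest(message: bytes) -> str:
    """Fixed-length value representing the message content (SHA-256)."""
    return hashlib.sha256(message).hexdigest()

original = digest(b"get each soldier a meal")
tampered = digest(b"get each soldier a mole")

# A one-word change produces a completely different hash, so Alice's
# rehash will not match the hash recovered from Bob's signature.
print(original == tampered)             # False
print(len(original) == len(tampered))   # True: the output length is fixed
```

Because the digest is fixed-length regardless of message size, signing the digest rather than the message keeps signatures small and fast to verify.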

If the hash values do not match, either the message text changed or the key used to create the signature is not Bob's. In some cases, the public key itself might not be Bob's. If an attacker, Eve, can convince Alice that a forged certificate is Bob's, Eve can send signed messages using the matching forged key, and Alice will verify them. It is important for a recipient to be sure the public key used in this process is valid.

Verifying the authenticity of keys is critical to asymmetric cryptography. We have to be sure that the person who says he is Bob is actually Bob or that the bank Web server we access is actually managed by our bank. There are two ways this can happen: through hierarchical trust or a web of trust.

Private industry usually relies on the hierarchical chain-of-trust model that minimally uses three components: a certificate authority (CA) that issues and signs certificates, a registration process that verifies applicant identities, and a directory or other mechanism for storing and distributing certificates.

The CA issues certificates binding a public key to a specific distinguished name provided by the certificate applicant (subject). Before issuing a certificate, however, it validates the subject's identity. One verification method is domain validation. The CA sends an email containing a token or link to the administrator responsible for the subject's domain. The recipient address might take the form of postmaster@domainname or root@domainname. The recipient (hopefully the subject or the subject's authorized representative) then follows verification instructions.

Another method, and usually one with a much higher cost for the requestor, is extended validation (EV). Instead of a simple administrator email exchange, a CA issuing an EV certificate steps through a rigorous identity verification process. The resulting certificates are structurally the same as other certificates; they simply carry the weight of a higher probability that the certificate holder is who they say they are.

A simple certificate issuance process is depicted in Figure 7-11. It is the same whether you host your own CA server or use a third party. The subject (end-entity) submits an application for a signed certificate. If verification passes, the CA issues a certificate and the public/private key pair. Figure 7-12 depicts the contents of my personal VeriSign certificate. It contains identification of the CA, information about my identity, the type of certificate and how it can be used, and the CA's signature (SHA1 and MD5 formats).

Figure 7-11: PKI (Ortiz, 2005)

The certificate with the public key can be stored in a publicly accessible directory. If a directory is not used, some other method is necessary to distribute public keys. For example, I can email or snail-mail my certificate to everyone who needs it. For enterprise PKI solutions, an internal directory holds all public keys for all participating employees.

Figure 7-12: Personal Certificate

The hierarchical model relies on a chain of trust. Figure 7-13 is a simple example. When an application or system first receives a subject's public certificate, it must verify the certificate's authenticity. Because the certificate includes the issuer's information, the verification process checks to see if it already has the issuer's public certificate. If not, it must retrieve it. In this example, the CA is a root CA, and its public key is included in its root certificate. A root CA is at the top of the certificate signing hierarchy. VeriSign, Comodo, and Entrust are examples of root CAs.
