Cryptographic Building Blocks
Any system, once deployed, can be attacked. Each distinct attack represents a threat against the system. For every threat, an analysis is done to determine its viability and the damage that would result if it were carried out. Based on that analysis, countermeasures are chosen so that the cost of launching the attack exceeds the expected gain from it.
Cryptographic tools represent such countermeasures. There is no single cryptographic tool. There are various techniques for encrypting messages, for securely exchanging keys, for maintaining the integrity of messages, and for guaranteeing authenticity of a message. These tools can be thought of as building blocks to construct protection against attack.
Each cryptographic building block solves a particular problem—how to authenticate bulk data, how to establish a shared secret—and building blocks can be combined to construct a cryptosystem that protects against threats. The cryptosystem must be stronger than the threats against it.
Generally, the strength of a cryptosystem is measured by its complexity. If 2^{32} separate operations are required to break a cryptosystem, its complexity is 2^{32}. That is a lot of operations, but if each operation can be performed by a modern computer in millionths or billionths of a second, the system might not be strong enough to protect against the threat. Because of this, the term computationally secure is used to express the security of a modern cryptosystem.
When building a cryptosystem it is necessary to ensure that the component building blocks are used properly and together maintain the necessary strength; a cryptosystem is only as strong as its weakest component. For instance, if the strength of the building block used to establish a shared secret is 2^{90} but the strength of the building block used to encrypt the data is only 2^{40}, the overall strength of the cryptosystem is 2^{40}, and that is not computationally secure against modern computers.
One-Way Functions and Trapdoors
A good portion of public key cryptography relies upon a foundation of one-way functions and trapdoors. A one-way function is something that is easy to compute in one direction but difficult, bordering on impossible, to compute in the other direction. A trapdoor is a way to sneak back: in effect, a way to cheat and return using a secret passage.
For a one-way function to be useful in cryptography it must exhibit its one-wayness for any input. For example, it is easy to compute the product of two large numbers but difficult to factor that product.
Another example is the Discrete Logarithm Problem: given a large prime, p, a generator, g, and a particular value y, find x where
y = g^{x} mod p
Modular exponentiation is easy, but computing a discrete logarithm to recover the exponent is hard. For any class of numbers—odd numbers, palindrome numbers, numbers divisible by 47—the problem of solving the discrete logarithm remains just as hard.
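The asymmetry can be demonstrated in a few lines of Python. The prime here is tiny so that the "hard" direction is actually feasible to run; real systems use primes of 1024 bits or more, for which the same search would never finish.

```python
# Easy direction: modular exponentiation is fast even for huge exponents.
p = 2579          # a toy prime; real systems use primes of 1024 bits or more
g = 2             # a generator of the multiplicative group mod p
x = 765           # the secret exponent
y = pow(g, x, p)  # y = g^x mod p, computed almost instantly

# Hard direction: absent a better algorithm, recovering x from (g, y, p)
# means trying every possible exponent.
def discrete_log(g, y, p):
    for candidate in range(p - 1):
        if pow(g, candidate, p) == y:
            return candidate

assert discrete_log(g, y, p) == x
```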
There are no mathematical proofs that one-way functions exist, but certain functions seem to have the properties that a one-way function would have and are generally referred to as such. There may be a way to factor numbers that is just as fast and easy as producing the product, but no one has discovered it yet. Because of that, we can put the presumed difficulty of factoring to good use.
Trapdoor functions are a bit harder to explain. Modern cryptographic algorithms use them but it's hard to point to a particular one and say, "that's it!" An example of a trapdoor function is a tree with many branches. To get from a leaf to the trunk is straightforward and requires no choices. To get from the trunk back out to a particular leaf requires choosing a branch, then a sub-branch, then another sub-branch, et cetera, and finally choosing the leaf. The trapdoor would be a description of which branch to take.
One-Way Hash Functions
One-way hash functions are used in modern cryptosystems for authentication and integrity purposes. A one-way hash function is different from the concept of a one-way function just described. Hash functions take a variable-sized message as input, compress it, and produce a fixed-sized digest. The output of a hash function will be identical for identical input. Since the output is fixed-sized for input of any length, there must exist two distinct inputs, X and Y, for a hash algorithm H, such that H(X) equals H(Y). Such an occurrence is called a collision. One-way hash functions are designed such that finding collisions—that is, finding two inputs that produce identical hash digests—is difficult.
Popular hash functions in use today are MD5 (Message Digest 5), SHA (the Secure Hash Algorithm), and RIPEMD. They each produce a different-sized digest and have different speed and collision-resistance properties, but all are used extensively today.
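The fixed-size, deterministic nature of a hash digest is easy to see with Python's hashlib module, which implements MD5 and SHA among others. SHA-1 is used here purely for illustration:

```python
import hashlib

# Identical input always produces an identical digest.
d1 = hashlib.sha1(b"The quick brown fox").hexdigest()
d2 = hashlib.sha1(b"The quick brown fox").hexdigest()
assert d1 == d2

# The digest is fixed-size (160 bits for SHA-1) no matter how long the input is.
d3 = hashlib.sha1(b"x" * 1_000_000).hexdigest()
assert len(d1) == len(d3) == 40   # 40 hex digits = 160 bits

# Even a one-character change produces a completely different digest.
d4 = hashlib.sha1(b"The quick brown fax").hexdigest()
assert d1 != d4
```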
Using one-way functions with trapdoors is much more computationally intensive than using one-way hash functions. Guaranteeing the integrity of a message using a one-way function with a trapdoor—such as in a digital signature scheme—takes considerably more time than guaranteeing the integrity of the message using a hash function. There are situations, though, in which it is not possible to use a one-way hash function. In later chapters you will see how IPSec and IKE use both techniques.
Another technique used quite a bit is the simple exclusive-OR (XOR) function. This is neither a one-way function nor a trapdoor function, but is, nonetheless, a useful tool in building cryptographic systems. Remember from early math classes that the XOR of two zeros is zero, the XOR of two ones is zero, and the XOR of a zero and a one (or a one and a zero) is one. XOR has a very important property: it is its own inverse. Taking any data and XORing it with a key of the same size (one bit, one byte, or more) produces an output that can be XORed with the key again to recover the original data. It is the most simplistic "encryption" algorithm. Note, however, that knowing either input and the output, it is possible to deduce the other input. This is not a characteristic of a real encryption algorithm and illustrates the weakness of using XOR for such a purpose.
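Both the self-inverting behavior of XOR and its weakness as a cipher can be shown in a short Python sketch (the key and message here are, of course, made up):

```python
# XOR "encryption": applying the same key twice recovers the data.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"attack at dawn"
key = b"supersecretkey"                          # same length as the data here
ciphertext = xor_bytes(plaintext, key)
assert xor_bytes(ciphertext, key) == plaintext   # XOR is its own inverse

# The weakness: knowing any two of (plaintext, key, ciphertext)
# reveals the third.
recovered_key = xor_bytes(plaintext, ciphertext)
assert recovered_key == key
```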
Ciphers
Data confidentiality is provided by encryption algorithms, which convert a message (plaintext) into gibberish (ciphertext) and back again. Some encryption algorithms are symmetric—the ability to encrypt implies the ability to decrypt—while others are asymmetric—without the use of a trapdoor it is not possible to decrypt what has been encrypted. Asymmetric algorithms are treated not as two separate functions (one for encryption and one for decryption) but as a single algorithm. So, regardless of the "symmetry" of a particular algorithm, encryption and decryption are inverse operations:
plaintext = Decrypt(Encrypt(plaintext))
This should be most obvious because any algorithm that permanently scrambled its input would be secure but of little use.
Symmetric Ciphers
Symmetric ciphers use a single key to do both encryption and decryption. There are two types of symmetric ciphers: block ciphers and stream ciphers. Block ciphers, such as AES, CAST, and Blowfish, operate on data one block at a time, with the size of the block depending on the algorithm (AES has a 128-bit block size, while both CAST and Blowfish have a 64-bit block size). Each block operation is treated as an atomic act. Stream ciphers, such as RC4, on the other hand, operate on data one bit (or one byte) at a time. Appropriately seeded with a key, they produce a stream of bits which can be XORed with the input. The encryptor and the decryptor must be synchronized to ensure that the bit in the stream used to encrypt a particular bit of plaintext is also the one used to decrypt the corresponding bit of ciphertext. If the two ever fall out of synchronization, the plaintext cannot be recovered. It is this synchronization problem that makes stream ciphers inappropriate for use with IPSec: if a packet is dropped when using a block cipher, that will not affect the processing of subsequent packets, but if a packet is dropped when using a stream cipher, all subsequent packets will be affected until the two sides somehow resynchronize.
Both types of symmetric ciphers are ideally suited for bulk encryption. Since block ciphers are used exclusively in IPSec, the reader is referred to the literature for an in-depth description of stream ciphers.
Block ciphers process data by first dividing it up into equal-sized chunks. The size of each chunk is determined by the block size of the cipher. Since there is no guarantee that the length of the input is a multiple of the block size of a block cipher, it may be necessary to pad the input. If the block size is 64 bits and the last block of input is only 48 bits, it may be necessary to add 16 bits of padding to the block prior to performing the encryption (or decryption) operation.
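A minimal padding routine might look like the following sketch. It uses PKCS#7-style padding (each pad byte records the pad length) purely for illustration; IPSec's ESP defines its own padding format:

```python
BLOCK_SIZE = 8   # 64 bits, as in the example above

def pad(data: bytes) -> bytes:
    # PKCS#7-style padding: every pad byte holds the pad length,
    # so the receiver knows how many bytes to strip.
    n = BLOCK_SIZE - (len(data) % BLOCK_SIZE)
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    return data[:-data[-1]]

msg = b"123456"                        # 48 bits: two bytes short of a block
padded = pad(msg)
assert padded == b"123456\x02\x02"     # 16 bits of padding appended
assert len(padded) % BLOCK_SIZE == 0
assert unpad(padded) == msg
```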
The basic way to use a block cipher is in Electronic Code Book (ECB) mode, in which each block of plaintext encrypts to a block of ciphertext. This causes problems, though, since the same block of plaintext will always encrypt, with the same key, into the same block of ciphertext. Therefore it is possible to build a code book of all possible ciphertexts (using all possible keys) for a known plaintext. If we know that an IP datagram was encrypted, we know that the first 20 bytes of ciphertext represent the IP header and that certain fields of an IP header are predictable. An attacker can use that knowledge, with a code book, to determine the key.
To foil the code book attack against a block cipher it is necessary to use the block cipher in a feedback mode. A feedback mode chains blocks together by feeding the results of prior operations into the current operation.
Cipher Block Chaining (CBC) mode takes the previous block of ciphertext and XORs it with the next block of plaintext prior to encryption. There is no "previous block" for the first block, so this mode is jump-started by XORing the first block with something called an Initialization Vector (IV). The length of the IV must be the same as the block size of the cipher to ensure the entire first block is processed. The IV must have strong pseudo-random properties to ensure that identical plaintext will not produce identical ciphertext. Decryption is the opposite of encryption: each block is decrypted and the result is XORed with the previous block of ciphertext to recover the plaintext; the first block is decrypted and XORed with the IV. All ciphers currently defined for use in IPSec are block ciphers operating in CBC mode.
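The chaining structure of CBC can be sketched in Python. A trivial XOR-with-key stand-in plays the role of a real block cipher such as AES; only the feedback arrangement is the point here:

```python
import os

BLOCK = 8
KEY = b"k" * BLOCK

# A toy block "cipher" (XOR with the key) stands in for a real cipher;
# it is a permutation of the block, which is all CBC requires.
def enc_block(b): return bytes(x ^ k for x, k in zip(b, KEY))
def dec_block(b): return enc_block(b)          # XOR is its own inverse

def cbc_encrypt(plaintext, iv):
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK]
        # XOR with the previous ciphertext block (or IV), then encrypt.
        prev = enc_block(bytes(x ^ p for x, p in zip(block, prev)))
        out += prev
    return out

def cbc_decrypt(ciphertext, iv):
    prev, out = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        # Decrypt, then XOR with the previous ciphertext block (or IV).
        out += bytes(x ^ p for x, p in zip(dec_block(block), prev))
        prev = block
    return out

iv = os.urandom(BLOCK)
pt = b"SAMEDATA" * 2                    # two identical plaintext blocks
ct = cbc_encrypt(pt, iv)
assert ct[:BLOCK] != ct[BLOCK:]         # chaining hides the repetition
assert cbc_decrypt(ct, iv) == pt
```

Note that the two identical plaintext blocks encrypt to different ciphertext blocks, which is exactly what defeats the code book attack described above.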
Asymmetric Ciphers
Asymmetric algorithms are also known as public key algorithms. There are two keys, one public and one private. One key does the encryption, the other the decryption, and given the public key it is computationally infeasible to determine the private key (as defined above, we can say that good public key algorithms are computationally secure). Good public key algorithms are based on one-way functions.
Public key cryptography is generally held to have been invented by Whitfield Diffie and Martin Hellman in their paper "New Directions in Cryptography," published in IEEE Transactions on Information Theory in 1976. Recently the Communications-Electronics Security Group (CESG) of the British government—the UK version of the United States' NSA—declassified some papers that showed that their cryptanalysts had actually invented the concept six years earlier. In 1970, James Ellis wrote an internal CESG report entitled "The Possibility of Secure Non-Secret Digital Encryption" which discussed an existence theorem, while Clifford Cocks and Malcolm Williamson wrote papers describing practical schemes that closely resemble the RSA and Diffie-Hellman schemes, respectively. Regardless, publication of the Diffie-Hellman paper was a seminal event whose importance is underscored by the nearly 20-year delay in release of the classified British papers. It is not beyond the realm of possibility that if "New Directions in Cryptography" had not been published, this knowledge would still be a classified secret known only to a few.
RSA
The most popular public key algorithm is RSA, named after its inventors Ron Rivest, Adi Shamir, and Leonard Adleman. The security of RSA is based on the difficulty of factoring the product of two very large prime numbers. This is a one-way function: it is easy to compute the product of two large prime numbers but extremely difficult to factor the product back into the original prime numbers. One of the features of RSA is that either key can be used to encrypt data that the other key can decrypt. This means that anyone can encrypt a message in your public key that you alone can decrypt. Also, you can encrypt anything with your private key that anyone with your public key can decrypt. You're probably thinking, what's the point then? But this concept is very important in nonrepudiation and digital signatures (which will be discussed shortly).
A drawback of RSA is that it is quite slow and can operate only on data up to the size of the modulus of its key. A 1024-bit RSA public key can only encrypt data that is less than or equal to that size (actually, 1013 bits, because the definition of how to encrypt using RSA requires an encoding that consumes 11 bits). While this restriction is similar to that of a symmetric block cipher, the slowness of RSA makes it unsuitable for bulk data encryption. This does not mean that RSA is not useful. On the contrary, it is a de facto standard for such important techniques as key exchange and digital signature.
ElGamal
Another public key cryptosystem which is suitable for encryption is ElGamal, named after its inventor, Taher ElGamal. The ElGamal cryptosystem is based on the Discrete Logarithm Problem. The main drawback of ElGamal is that the ciphertext is twice the size of the plaintext. Given our already saturated networks, this is a large drawback. ElGamal is quite similar to the Diffie-Hellman key exchange, which we'll discuss in detail shortly.
Authentication and Integrity
Confidentiality is necessary to keep a secret, but without authentication you have no way of knowing that the person with whom you share the secret is who she claims to be. And with no confidence in the integrity of a received message, you don't know whether it is the same message that was actually sent.
Authentication
Public key cryptography can be used for authentication purposes by constructing a so-called digital signature, which has properties similar to a traditional signature. A traditional handwritten signature is difficult to forge, and is therefore difficult to repudiate. But because a handwritten signature is just more writing on a document, it is possible (although also difficult, given a well-written document) for unscrupulous people to add additional text to an already signed document, giving the impression that the signer agrees to or acknowledges that text.
The Internet is a largely anonymous place and digital information can live a long time, so there are other properties we need for digital signatures in addition to those that a traditional handwritten signature affords.
A digital signature must be difficult to forge and therefore difficult to repudiate, just like a traditional signature. In addition, it must convey message integrity and must be unique. We want to prevent additional text from being added to a digitally signed file and we also want to prevent a signature from being removed from an authentic, signed document and added to other documents. These properties can all be met using public key cryptography.
It is easiest to envision digital signature generation as encryption and verification of a digital signature as decryption. In fact, that is the way an RSA signature works. But another public key algorithm, in fact a standard for digital signatures, aptly named the Digital Signature Standard (DSS), does not operate in that manner. The difference will be explained shortly, but for purposes of illustration, think of signing and verification as encryption and decryption.
What the private key encrypts, the public key decrypts. Provided the private key from a public/private key cryptosystem is kept secret, it can be used to construct digital signatures. By encrypting a document with a private key, anybody in possession of the corresponding public key can decrypt the document. Of course, an encrypted document is hardly a signature, and verification would merely entail reconstructing something intelligible out of the encrypted gibberish. It would also require decryption, and implicit signature verification, every time the document merely needs to be read.
A digital signature is therefore not a private-key encryption of the entire document. Digital signature techniques use one-way hash functions to reduce a document down to a digest; it is that digest that is encrypted. Remember that a hash function will produce the same digest every time it is given identical input and that the input can be of arbitrary length. Provided the hash function has strong collision-resistant properties, we can be assured that the signature is unique to the document.
The encrypted digest, the digital signature, can then be appended to the original document. Verification of the signature entails running the original document through the identical hash function to produce a temporary digest and decrypting the signature to recover the original digest. If the two digests are equal, the signature is valid. This technique has all the properties we need:

- difficult to forge: only the holder of the private key can generate the signature.
- non-repudiable: a signed document cannot be repudiated later due to the extreme difficulty of forging.
- unalterable: once signed, a document cannot be modified.
- non-transferable: the signature cannot be removed and attached to another document.
It is also possible to have multiple signatures, produced from different private keys, on a single document. Each signature is generated in the same fashion by encrypting a digest of the document to be signed. These encrypted digests are merely appended, one after the other, on the end of the document.
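The sign-and-verify flow described above can be sketched with textbook RSA. The key sizes here are absurdly small and the digest is simply reduced to fit the toy modulus (real schemes use a standard encoding such as PKCS#1 instead), so this is strictly illustrative:

```python
import hashlib

# Toy RSA parameters (never use sizes like this in practice).
p, q = 61, 53
n = p * q                 # the modulus, 3233
e = 17                    # public exponent
d = 2753                  # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def digest(msg: bytes) -> int:
    # Hash the document, then reduce modulo n so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # "Encrypt" the digest with the private key.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # "Decrypt" with the public key and compare digests.
    return pow(sig, e, n) == digest(msg)

doc = b"I agree to the terms."
sig = sign(doc)
assert verify(doc, sig)
assert not verify(doc, sig + 1)   # a tampered signature fails verification
```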
RSA
Due to its unique nature—what one key encrypts the other decrypts—RSA is well suited for digital signatures as well as for encryption. You just use a different key to do the encryption! The technique described previously is exactly what happens when using RSA with digital signatures.
There are no requirements to use any particular hash algorithm when using RSA signatures.
DSA
The Digital Signature Algorithm (DSA) is similar to the ElGamal public key scheme; both are based on the Discrete Logarithm Problem.
As mentioned, the Digital Signature Algorithm does not actually do encryption for signature generation and decryption for signature verification (although it does have a public and private key). Instead, the private key is used to generate two 160-bit values which together represent the signature, and verification is a mathematical demonstration, using the public key, that those two values could only have been generated by the private key and the document that was signed. There is no real "decryption".
DSA requires use of SHA as a hash function for signatures. SHA is the algorithm defined in the U.S. government Federal Information Processing Standard (FIPS) for the Secure Hash Standard and was therefore selected to use for another FIPS, the Digital Signature Standard, of which DSA is the algorithm.
Message Integrity
A digital signature provides integrity protection for the signed document: any modification to the document will be detected when the signature is checked. One drawback of digital signatures is that they are slow; another is that the entire message must be known prior to signature generation. There is no efficient way to provide message integrity of an ongoing data stream using digital signatures.
Just as there are symmetric and asymmetric ciphers, there are symmetric and asymmetric methods of guaranteeing message integrity. Similar to symmetric ciphers, where one single key is used for both encryption and decryption, symmetric message authentication codes (MACs) use a single key for generating and verifying the authentication information. (MACs are sometimes erroneously referred to as signatures—they're not.)
Hash functions are used as MACs just as they are in digital signatures. Since the input to a hash function can be of any length, all one needs to do to generate a MAC is hash a shared secret key along with the message. The resulting digest is attached to the message, and verification of the MAC entails hashing the shared secret key with the message to produce a temporary digest and comparing that temporary digest with the digest attached to the message. This technique is referred to as keyed hashing. Keyed hashing is important because merely performing a hash on some data does not provide any authentication: anybody could modify the data and simply run the hash algorithm over the modified data. A hash function alone is like a checksum; a keyed hash function is a MAC.
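A naive keyed hash can be sketched in a few lines of Python (the key and messages here are invented for illustration). Simple prefix constructions like this have known weaknesses with iterated hash functions, such as length-extension attacks, which is one reason the HMAC construction exists:

```python
import hashlib

SECRET = b"shared-secret-key"    # hypothetical key shared by both parties

def mac(key: bytes, message: bytes) -> bytes:
    # Naive keyed hash: digest of key || message.
    return hashlib.sha256(key + message).digest()

msg = b"pay Alice $100"
tag = mac(SECRET, msg)           # sender attaches this tag to the message

# The receiver, holding the same key, recomputes and compares.
assert mac(SECRET, msg) == tag

# Altered data no longer matches the attached tag.
assert mac(SECRET, b"pay Mallory $100") != tag
```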
Keyed hashing can be used to provide message authentication to a stream of data by dividing the stream into easily digestible chunks and computing a MAC on each chunk. Those MACs then become part of the stream and are used to verify the integrity of the stream as it is received. Another benefit of keyed hashing is that generation of a hash digest is much faster than generation of a digital signature.
A special kind of keyed hash is called an HMAC, and was designed by Hugo Krawczyk, Ran Canetti, and Mihir Bellare. The HMAC specification is in RFC 2104 and can be utilized with any existing hash function, so SHA can become HMAC-SHA and MD5 becomes HMAC-MD5. The HMAC construction is cryptographically stronger than the underlying hashing function. There has recently been a demonstrated collision attack against MD5 (where it is possible to find two different inputs that produce the same digest), but HMAC-MD5 is not susceptible to this attack.
An HMAC is also a keyed hash but is actually a keyed hash inside a keyed hash. It uses two constant pad values—an inner pad and an outer pad—to modify the keys to the hashes. The HMAC based on hash algorithm H of message M using key K is defined as
HMAC(K, M) = H(K XOR opad, H(K XOR ipad, M))
where the comma denotes concatenation, ipad is a 64-byte array of the value 0x36, and opad is a 64-byte array of the value 0x5c.
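The construction can be reproduced directly from this definition and checked against Python's standard hmac module; the key and message below are a published test vector from RFC 2104:

```python
import hashlib
import hmac

BLOCK_LEN = 64   # MD5 (and SHA-1) process data in 64-byte blocks

def hmac_md5(key: bytes, msg: bytes) -> bytes:
    if len(key) > BLOCK_LEN:                 # long keys are hashed down first
        key = hashlib.md5(key).digest()
    key = key.ljust(BLOCK_LEN, b"\x00")      # pad key out to the block length
    ipad = bytes(k ^ 0x36 for k in key)
    opad = bytes(k ^ 0x5c for k in key)
    inner = hashlib.md5(ipad + msg).digest()   # H(K XOR ipad, M)
    return hashlib.md5(opad + inner).digest()  # H(K XOR opad, inner)

key, msg = b"Jefe", b"what do ya want for nothing?"
tag = hmac_md5(key, msg)
assert tag == hmac.new(key, msg, hashlib.md5).digest()   # matches the stdlib
assert tag.hex() == "750c783e6ab0b503eaa86e310a5db738"   # RFC 2104 test vector
```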
All message authentication done in IPSec uses HMACs.
Key Exchanges
Symmetric ciphers and symmetric MACs both require a shared key. The security of the encryption and authentication techniques could be completely undermined by an insecure key exchange.
Diffie-Hellman
The Diffie-Hellman key exchange is the first public key cryptosystem and was the one described in the aforementioned paper "New Directions in Cryptography" by Whitfield Diffie and Martin Hellman. The Diffie-Hellman key exchange is based on the Discrete Logarithm Problem (notice how often this one-way function is used).
This key exchange is extremely important. Using the Diffie-Hellman exchange, a non-secret, untrusted communications channel (like the Internet) can be used to securely establish a shared secret among the parties of the exchange. It is because of the Diffie-Hellman key exchange that symmetric ciphers and symmetric message integrity schemes (which both require a shared key) can be used in a scalable manner.
The usual players in descriptions of modern cryptography are Alice and Bob, and they can be used to illustrate the Diffie-Hellman exchange. All participants in a Diffie-Hellman exchange must first agree on a group that defines which prime, p, and generator, g, will be used. A Diffie-Hellman exchange has two parts. In the first part, each side, Alice and Bob, chooses a random private number (indicated by the lowercase initial of the party) and exponentiates in the group to produce a public value (uppercase initial of the party):
Alice: A = g^{a} mod p          Bob: B = g^{b} mod p
They exchange their public values (Alice gives A to Bob and Bob gives B to Alice) and each exponentiates again, using the other party's public value as the base, to produce the shared secret:
Alice: B^{a} mod p = g^{ab} mod p          Bob: A^{b} mod p = g^{ab} mod p
Notice that A and B can be exchanged over an insecure network without lessening the security of the scheme. g and p do not even need to be kept secret. An eavesdropper (she's usually referred to as Eve) could know g and p a priori, intercept A and B over the insecure channel, and still not be able to discover the secret! Once Alice and Bob share a secret they can use it to protect their communications. The Diffie-Hellman exchange allows an insecure channel to become secure. The importance of this cannot be overstated.
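The whole exchange fits in a few lines of Python. The group parameters here are toy values; real exchanges use primes of 1024 bits or more:

```python
import secrets

# Toy group parameters agreed on in advance; neither needs to be secret.
p = 2579
g = 2

a = secrets.randbelow(p - 2) + 1     # Alice's private value
b = secrets.randbelow(p - 2) + 1     # Bob's private value

A = pow(g, a, p)   # Alice's public value, sent to Bob in the clear
B = pow(g, b, p)   # Bob's public value, sent to Alice in the clear

# Each side combines its own private value with the other's public value.
alice_secret = pow(B, a, p)          # (g^b)^a mod p
bob_secret = pow(A, b, p)            # (g^a)^b mod p
assert alice_secret == bob_secret    # both equal g^(ab) mod p
```

Eve sees p, g, A, and B but never a or b, so recovering the shared secret would require solving the discrete logarithm.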
One drawback of the Diffie-Hellman exchange is that it is susceptible to a man-in-the-middle attack. In this attack, Mallory intercepts messages between Alice and Bob and fraudulently responds, impersonating Bob to Alice and Alice to Bob. Alice thinks she's doing a Diffie-Hellman exchange with Bob but she's really doing it with Mallory. Similarly, Bob thinks he's doing a Diffie-Hellman exchange with Alice but he's also doing it with Mallory. Alice can then send Bob secret information protected with the shared secret she thinks she shares with Bob. Mallory can decrypt it, copy it, and re-encrypt it with the secret that Bob holds (which he thinks is shared with Alice). Neither Alice nor Bob detects anything out of the ordinary, except perhaps some delay in delivery due to Mallory's involvement.
The susceptibility to a man-in-the-middle attack does not render the Diffie-Hellman exchange useless, though, because the attack can be thwarted by having Alice and Bob digitally sign their public values. Mallory will not be able to fool Bob into signing her public value and will not be able to make Alice think that her signature is in fact Bob's.
RSA Key Exchange
With the RSA cryptosystem it is possible to encrypt with either the public or private key and what one key encrypts the other can decrypt. This capability can be put to use for doing a simplistic key exchange. If Alice wishes to use symmetric cryptography to protect her communications with Bob, she can choose a random number as the key, encrypt it in Bob's public key, and send it to him. Only Bob will be able to decrypt the key since he, alone, has possession of his private key.
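This simplistic key exchange can be sketched with textbook RSA (toy key sizes, no padding, purely illustrative):

```python
import secrets

# Toy RSA keypair for Bob (illustrative sizes only).
n, e = 3233, 17          # Bob's public key, known to everyone
d = 2753                 # Bob's private key, known only to Bob

# Alice picks a random symmetric key (small enough to fit the toy
# modulus) and encrypts it in Bob's public key.
session_key = secrets.randbelow(n)
wrapped = pow(session_key, e, n)     # sent to Bob over the network

# Only Bob can unwrap it, because only he holds the private key.
assert pow(wrapped, d, n) == session_key
```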
An obvious problem with this approach is that anybody—such as Mallory—can encrypt anything in Bob's public key. Alice needs something to bind herself to this key. Once again, a digital signature can be used for such a binding. Alice can sign the key and encrypt both the key and her signature in Bob's public key. A drawback to this approach is that an RSA signature is the same as an RSA encryption: it can only be done on data that is less than the size of the modulus, and the result is the size of the modulus. If Alice's RSA private key is the same size as Bob's RSA public key, her signature will be too big to encrypt in a single operation.
Also, a benefit of a Diffie-Hellman exchange is that each side contributes to the resulting key; neither imposes the key on the other. For many applications this is an important issue; for others, not so much.