Posted on 30th August 2019 by Thomas Ballin
Transport Layer Security (TLS) is a series of protocols for encrypted network communications. This is the foundation for being able to trust the security of services like HTTPS.
At its core, TLS relies upon three components: the protocol version negotiated, the cipher suite selected, and the certificate presented.
These three components interact to form transport security, and all are essential in providing the confidentiality and integrity that many network services require.
The protocols used for transport security have evolved over the years, driven in part by new technology and ways of doing things, but mostly led by research identifying fundamental flaws in the way that the protocols function.
The first protocol to discuss, and a precursor to TLS, is SSLv2. It would seem a strange place to start, but SSLv1 was never publicly released and so this was the first consumer protocol for transport security.
The protocol stood from 1995 until 2011, when its deprecation was announced in RFC6176 (https://tools.ietf.org/html/rfc6176). This document detailed four core deficiencies of the protocol: message authentication relies on MD5; handshake messages are not protected, permitting downgrade attacks; message integrity and encryption use the same key; and sessions can be easily truncated by an attacker.
SSLv3 ran concurrently with SSLv2 from 1996 until the latter's deprecation, after which it remained in use until its own deprecation was announced in 2015 in RFC7568 (https://tools.ietf.org/html/rfc7568).
The catalyst for the deprecation was a weakness identified by Google engineers and dubbed POODLE (Padding Oracle On Downgraded Legacy Encryption). This was the first mainstream transport security issue to reach major news headlines.
TLSv1.0 was announced in January 1999, followed by TLSv1.1 in April 2006. A draft RFC was released in June 2018 which, if approved, will formally announce their deprecation.
In the interim, these protocols are considered legacy, as the cipher suites offered suffer many of the same weaknesses as they did under the SSLv3 implementation. Furthermore, where card data is being processed, the Payment Card Industry Data Security Standard (PCI DSS) has outlawed the use of TLSv1.0.
TLSv1.2 was formally announced in RFC5246 (https://tools.ietf.org/html/rfc5246) and was released in August of 2008. A decade later this was followed by TLSv1.3 which was released under RFC8446 (https://tools.ietf.org/html/rfc8446).
Best practice for security dictates that these are the only protocols which should be supported, as they are the only protocols supporting cryptographically secure cipher suites.
Advice on selecting protocols
TLSv1.2 is over a decade old now and supported by all modern browsers. As such, there is little reason to enable earlier versions for legacy support.
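In practice this is a one-line configuration in most TLS libraries. As a minimal sketch using Python's built-in ssl module, a client context can be restricted to TLSv1.2 and above:

```python
import ssl

# Build a client context that will only negotiate TLSv1.2 or TLSv1.3.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)  # TLSv1_2
```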
There is a large range of cipher suites available when performing transport security negotiations. These will dictate the algorithms used, and which cipher is appropriate will depend heavily on the context in which they are used.
Cipher suites can be broken down into four components: the key exchange algorithm, the encryption algorithm, the mode of operation, and the message authentication (hashing) algorithm.
Cipher suites are generally written in shorthand; for example, ECDHE-RSA-AES256-GCM-SHA384 denotes a suite of the kind supported by sites such as https://www.secarma.com.
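As a sketch, a suite name such as ECDHE-RSA-AES256-GCM-SHA384 (an illustrative example, not necessarily a suite any particular server offers) can be split into its component parts:

```python
# Illustrative OpenSSL-style cipher suite shorthand.
suite = "ECDHE-RSA-AES256-GCM-SHA384"

key_exchange, authentication, cipher, mode, mac = suite.split("-")
print(key_exchange)    # ECDHE  - ephemeral elliptic-curve Diffie-Hellman
print(authentication)  # RSA    - server authentication
print(cipher, mode)    # AES256 GCM - bulk encryption algorithm and mode
print(mac)             # SHA384 - message authentication / PRF hash
```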
During the initial negotiation “handshake” a key exchange will take place to provide the client and the server with the secrets required to encrypt and decipher data using the selected algorithm.
The most common key exchange algorithms are RSA, Diffie-Hellman (DH), ephemeral Diffie-Hellman (DHE), and their elliptic-curve counterparts (ECDH and ECDHE).
The two main considerations when implementing a key exchange are the key length and forward secrecy.
The length of the secret used to encrypt data is fundamental to the confidentiality of the data; larger keys exponentially increase the difficulty of brute-forcing the key.
There is debate regarding the key length needed to provide assurance, and the feasibility of brute-forcing a key remains heavily dependent on the algorithm used; however, the consensus is that a key should be a minimum of 128 bits.
Some key exchange modes afford better security with a smaller key; for instance, elliptic-curve (EC) modes rely upon the mathematical properties of elliptic curves to provide a level of security equivalent to a much larger finite-field key (a 256-bit curve is broadly comparable to a 3072-bit RSA or classic Diffie-Hellman key).
In 2015 a weakness, referred to as LogJam (https://nvd.nist.gov/vuln/detail/CVE-2015-4000), was identified in implementations of the Diffie-Hellman key exchange algorithm which relied upon a modulus of 1024-bits or less when generating secrets. The mitigations to this were to use a modulus >1024-bits, or to use EC mode.
Forward secrecy is a means of assuring that, if a key is compromised, future communications are not at risk. This relies on the ability to generate new keys during each negotiation, and for it to be impossible to derive one key from another.
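The mechanics behind an ephemeral exchange can be sketched with textbook finite-field Diffie-Hellman. The parameters below are deliberately tiny for illustration; real deployments use moduli of 2048 bits or more:

```python
import secrets

p = 0xFFFFFFFB  # a small prime modulus, for illustration only
g = 5           # generator

# Each side generates a fresh ("ephemeral") private value per session;
# discarding these after use is what provides forward secrecy.
a = secrets.randbelow(p - 2) + 1  # client's private value
b = secrets.randbelow(p - 2) + 1  # server's private value

A = pow(g, a, p)  # client's public value, sent in the clear
B = pow(g, b, p)  # server's public value, sent in the clear

# Both sides derive the same shared secret; an eavesdropper sees only
# p, g, A and B.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
print(client_secret == server_secret)  # True
```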
The encryption algorithm is the heart of the cryptographic process; this is the mathematics that converts plaintext into ciphertext. There are a vast variety of different options available, however some of the most common have been explored below:
RC4 is a stream cipher that was deprecated in 2015, as announced in RFC7465 (https://tools.ietf.org/html/rfc7465).
This cipher has known biases which undermine the assurance it can provide, which have led to practical (useable) attacks when used for HTTPS.
DES is a block cipher which operates using 64-bit blocks and a 64-bit key; however, eight of the key bits are parity bits, giving an effective key size of only 56 bits. This is significantly below the minimum required for cryptographic security, and so DES has been deprecated.
Triple-DES (3DES) describes a cipher that performs multiple DES encryption operations on a piece of plaintext, to increase the effective key size of the ciphertext without needing to revise the core operation of the cipher.
DES originally operated using a 56-bit key, and by performing multiple DES operations the effective key size is increased sufficiently to avoid outright deprecation of the algorithm. However, 3DES suffers from two core weaknesses which reduce the level of assurance it can provide. The first is that meet-in-the-middle attacks reduce the effective strength of three-key 3DES from 168 bits to roughly 112 bits.
The second weakness identified is in the use of 64-bit block sizes when operating in cipher-block chaining (CBC) mode. This block size makes it statistically probable that, when encrypting a large volume of blocks using the same key (e.g. in TLS) there will be a ciphertext collision. This collision can be used to infer the XOR of two blocks of plaintext. This may then be subjected to a range of attacks to recover the plaintext. This is commonly referred to as the Sweet32 (https://nvd.nist.gov/vuln/detail/CVE-2016-2183) weakness.
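The birthday bound underlying Sweet32 can be estimated directly. This sketch compares the collision probability after encrypting 2^32 blocks (about 32 GB with 8-byte blocks) under a 64-bit block cipher such as 3DES against a 128-bit block cipher such as AES:

```python
import math

def collision_probability(n_blocks: int, block_bits: int) -> float:
    # Birthday-bound approximation: P ≈ 1 - exp(-n^2 / 2^(b+1)).
    # expm1 keeps precision when the probability is very small.
    return -math.expm1(-(n_blocks ** 2) / (2 * 2 ** block_bits))

p64 = collision_probability(2 ** 32, 64)    # 3DES-sized blocks
p128 = collision_probability(2 ** 32, 128)  # AES-sized blocks
print(f"64-bit blocks:  {p64:.2f}")   # roughly 0.39
print(f"128-bit blocks: {p128:.2e}")  # vanishingly small
```

The contrast is why the advice is not "never use CBC" but "never use 64-bit blocks": the collision that Sweet32 exploits becomes likely within realistic traffic volumes only when the block is small.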
The consensus is that AES (the Advanced Encryption Standard) is the standard for modern symmetric cryptography. It describes an algorithm that chunks plaintext into 128-bit blocks and encrypts them in one of several modes.
The number suffixing AES denotes the key size, not the block size (which is always 128 bits). Generally, the larger the key, the greater the effective strength of the encryption; for example, AES256 uses a 256-bit key and is considered stronger than AES128.
Block ciphers can operate in one of four modes, which determine how each block will be encrypted. The modes are as follows:
Electronic Codebook (ECB) mode enables the direct translation between any given block of plaintext and its ciphertext counterpart, meaning that two identical blocks of plaintext will result in identical blocks of ciphertext.
The biggest issue with this, often referred to as the Tux problem (https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Electronic_Codebook_(ECB)), is that an attacker with a large enough volume of ciphertext may be able to use the similarities to infer details about the plaintext.
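The determinism can be demonstrated without a real cipher. The sketch below stands in a keyed hash for the block cipher (this is NOT a secure cipher; it simply shows that ECB maps equal plaintext blocks to equal ciphertext blocks):

```python
import hashlib

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for a 128-bit block cipher: a keyed hash truncated to 16 bytes.
    return hashlib.sha256(key + block).digest()[:16]

def toy_ecb_encrypt(key: bytes, blocks):
    # ECB: every block is encrypted independently, with no chaining or IV.
    return [toy_encrypt_block(key, b) for b in blocks]

key = b"sixteen byte key"
plaintext = [b"ATTACK AT DAWN!!", b"ATTACK AT DAWN!!", b"RETREAT AT DUSK!"]
ciphertext = toy_ecb_encrypt(key, plaintext)

# Identical plaintext blocks yield identical ciphertext blocks,
# leaking the structure of the message to an eavesdropper.
print(ciphertext[0] == ciphertext[1])  # True
print(ciphertext[0] == ciphertext[2])  # False
```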
Cipher Block Chaining (CBC) describes a mode in which each plaintext block is XORed with the previous ciphertext block before encryption, so the previous ciphertext block effectively acts as the initialisation vector (IV) for the next. It was developed to mitigate the weaknesses of ECB.
However, where the first IV has insufficient entropy (a common mistake when using cryptographic libraries), identical messages encrypted under the same key produce identical ciphertexts, effectively reintroducing the Tux problem.
Furthermore, CBC-mode block ciphers used in the MAC-then-Encrypt (MtE) construction, as is most common in TLS, suffer from a padding oracle weakness. This occurs because the integrity check does not cover the entire block, instead validating only the plaintext portion. Therefore, an attacker can freely modify any other portion of the block, such as the block padding.
Padding is suffixed to a block to ensure it is of the appropriate length, which is essential for CBC mode to operate. The padding consists of bytes which all share the same value, equal to the length of the padding. For example, a 16-byte block that requires four bytes of padding ends in 04 04 04 04.
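That scheme can be sketched in a few lines. This follows the PKCS#7 convention (each of the n padding bytes has the value n), which is closely related to the padding TLS uses for CBC records:

```python
def pad(data: bytes, block_size: int = 16) -> bytes:
    # Append n bytes, each with value n, to reach a block boundary.
    n = block_size - (len(data) % block_size)
    return data + bytes([n] * n)

padded = pad(b"twelve bytes")
print(padded)       # b'twelve bytes\x04\x04\x04\x04'
print(len(padded))  # 16
```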
The decryption routine deciphers the ciphertext, validates and removes the padding, and then validates the integrity of the plaintext.
Where an attacker can modify the cipher block, it is possible to inject arbitrary padding values. This will subsequently cause the decryption routine either to fail to validate the padding, or to fail to validate the integrity of the plaintext.
Where the error returned indicates that the integrity of the plaintext, rather than the padding, could not be validated, the attacker learns that the forged padding was accepted, and the last byte of plaintext can be inferred from the padding value.
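The two failure modes can be modelled with a toy sketch. Real TLS records carry a MAC rather than a known plaintext, but the leak is the same: the server's distinguishable errors tell the attacker whether a forged padding was accepted:

```python
def unpad(block: bytes) -> bytes:
    n = block[-1]
    if n == 0 or n > len(block) or block[-n:] != bytes([n] * n):
        raise ValueError("bad padding")  # failure mode 1: padding invalid
    return block[:-n]

def decrypt_check(block: bytes, expected_plaintext: bytes) -> str:
    # Toy stand-in for the server's decryption routine.
    try:
        plaintext = unpad(block)
    except ValueError:
        return "bad padding"
    if plaintext != expected_plaintext:
        return "bad MAC"                 # failure mode 2: integrity invalid
    return "ok"

# A tampered block whose forged padding is malformed:
print(decrypt_check(b"hello world\x02\x03\x02", b"hello world!"))  # bad padding
# A tampered block whose forged padding is valid leaks "bad MAC" instead,
# telling the attacker something about the final plaintext byte(s):
print(decrypt_check(b"hello world!\x02\x01", b"hello world!"))     # bad MAC
```

The standard mitigation is to make the two failures indistinguishable (in error message and in timing), or better, to use an authenticated mode such as GCM that removes the padding step entirely.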
Finally, when operating using block sizes of 64-bits or less, CBC mode is affected by the Sweet32 attack, as previously described.
Galois/Counter Mode (GCM) and Counter with CBC-MAC (CCM) are considered the best practice modes of operation. Currently supported under TLSv1.2 and above, they afford the best cryptographic security for block ciphers.
CCM has not been as widely adopted as GCM, in part because it cannot be pipelined or parallelised and is therefore not suitable for higher-bandwidth operations.
Hash-based message authentication codes (HMACs) are keyed integrity checks used to validate that a portion of ciphertext has not been tampered with. This is essential in mitigating attacks such as padding oracles, as well as ensuring that TLS sessions are not hijacked.
Message authentication may operate in one of two ways: reusing the secret already negotiated for encryption, or exchanging a separate, dedicated MAC key.
The advantage of the former is that it does not require the exchange of a separate key to perform the HMAC function; however, it does open the algorithm up to cryptographic oracle attacks.
For an HMAC to be effective, the underlying hash function must not be susceptible to collisions. For instance, MD5 is no longer considered an acceptable choice, because practical collision attacks against it exist.
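As a minimal sketch using Python's standard library, an HMAC is computed over the message with a shared key, and verified with a constant-time comparison:

```python
import hashlib
import hmac

key = b"shared mac key"
message = b"ciphertext to authenticate"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verification recomputes the tag; compare_digest avoids timing leaks.
valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(valid)  # True

# Any tampering with the message changes the tag.
tampered_tag = hmac.new(key, message + b"!", hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, tampered_tag))  # False
```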
Advice on selecting cipher suites:
Two lists are configured when selecting cipher suites: preferred and supported.
Preferred is the default as defined by the server and should therefore be the strongest cipher suite available.
Supported is a list of alternate ciphers to be made available if the client does not support the default. In general, this list should only include strong ciphers.
It may sometimes be necessary to support cipher suites that do not make use of GCM or CCM mode. As such, CBC mode may still be appropriate for data transfer, provided a cipher with at least 128-bit blocks is used (e.g. AES, avoiding the Sweet32 issue) and the IV is suitably random.
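With OpenSSL-backed libraries, these preferences are expressed as a cipher string. As a sketch using Python's ssl module, the context below (the cipher string is one reasonable choice, not the only one) restricts TLSv1.2 negotiation to forward-secret AES-GCM suites:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# OpenSSL cipher-string syntax: ephemeral EC Diffie-Hellman with AES-GCM.
# (TLSv1.3 suites are configured separately and remain enabled.)
ctx.set_ciphers("ECDHE+AESGCM")

for suite in ctx.get_ciphers():
    print(suite["name"])
```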
Certificates allow clients to initiate a secure connection without a pre-existing secure channel over which to negotiate secrets. They do this by providing the connecting client with a public key which can be used to encrypt or authenticate the initial handshake that the client performs.
A certificate is made up of several key pieces of information, including: the subject, the issuer, the signature, the expiry date, and the server's public key.
For demonstration purposes, consider a certificate issued to identify www.secarma.com.
The Subject determines that the certificate is permitted to identify any hostname defined in either the “Common Name” or “Subject Alternative Name” fields.
Certificates may also be configured with wildcard characters in their subject fields. These enable arbitrary subdomains to be matched. For example, a subject of *.secarma.co.uk would identify any host directly under the secarma.co.uk domain.
The risk with using wildcard characters is that, if the certificate keypair is compromised on one host, this compromises the transport security of all other hosts that use the certificate.
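Hostname matching against a wildcard subject can be sketched as below. This is a simplification (a leading "*." matches exactly one label); real validation should always be left to the TLS library, which follows RFC 6125:

```python
def matches(pattern: str, hostname: str) -> bool:
    # Exact match when there is no wildcard.
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    # "*." matches exactly one leading label, not nested subdomains.
    label, sep, rest = hostname.partition(".")
    return bool(sep) and rest.lower() == pattern[2:].lower()

print(matches("*.secarma.co.uk", "www.secarma.co.uk"))  # True
print(matches("*.secarma.co.uk", "a.b.secarma.co.uk"))  # False (two labels)
print(matches("*.secarma.co.uk", "secarma.co.uk"))      # False (no subdomain)
```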
The issuer is the authority that assures the validity of the certificate. Several trusted public certificate authority bodies exist for this purpose, although a private issuing authority may also be implemented in a context where clients may be expected to install their own certificate authorities (for instance on an internal domain).
The integrity of all transport security depends on implicit trust of these certificate authorities; if a trusted authority was compromised then this would undermine the ability to trust transport security.
The signature is a digital signature generated by the issuer to confirm that the certificate, and the public/private keypair it describes, is the same as has been installed on the service being connected to.
The issuer does this by signing a hash of the certificate's contents with the issuer's private signing key. A client may then verify this signature using a local copy of the issuer's public signing key; if the recovered value matches the hash that the client computes itself, this indicates that the certificate is valid.
For a signature to offer cryptographic assurance, the keys used must be of sufficient length (2048-bit or above) and the hashing algorithm used for the signature must be considered cryptographically strong. Weaker algorithms, such as MD5 and SHA1, should be avoided because of their susceptibility to collisions.
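The sign-and-verify mechanics can be illustrated with textbook RSA. The numbers below are deliberately tiny and the scheme is unpadded, purely to show the shape of the operation; real certificates use keys of 2048 bits or more with padded signature schemes:

```python
import hashlib

p, q = 61, 53
n = p * q        # 3233: the issuer's public modulus
e = 17           # public exponent
d = 2753         # private exponent (e * d ≡ 1 mod φ(n))

def digest(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

certificate_contents = b"subject, public key, validity period"

# The issuer signs a hash of the certificate with its private key...
signature = pow(digest(certificate_contents), d, n)

# ...and any client can verify it with the issuer's public key alone.
print(pow(signature, e, n) == digest(certificate_contents))  # True
```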
The expiry date is the date at which the certificate should no longer be recognised as assuring the identity of the host. This is a protection to limit the impact of compromised certificates.
Certificates should not be issued with excessive expiry dates (e.g. 10+ years) and a process should be implemented to ensure that new certificates are issued when required.
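Checking expiry programmatically is straightforward. The sketch below uses Python's ssl helper to parse a hypothetical notAfter value (the date shown is made up):

```python
import ssl
import time

# Hypothetical "notAfter" value, in the format used by the ssl module.
not_after = "Jun 1 12:00:00 2030 GMT"

expiry = ssl.cert_time_to_seconds(not_after)
print(expiry > time.time())  # True until June 2030
```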
The server’s public key is used to encrypt or authenticate the client’s first handshake when negotiating a secure connection, depending on the key exchange in use. As such, it should be afforded the same cryptographic expectations as other keys, with an appropriate algorithm and key size selected.
The current best practice is to use RSA with a key length of at least 2048 bits.
Advice on selecting certificates
The first factor to consider when selecting an SSL certificate is the trust that you put in the issuer; issuers generally have a strong reputation, but research may be needed to ensure that your issuer meets your requirements.
Next consideration will likely be cost. Although this isn’t a security consideration, issuers often charge extra for stronger certificates, and will generally charge per certificate. Having said that, never buy a certificate that uses a weak hashing algorithm, a weak key, or is signed by an authority with a poor reputation.
The final consideration will be practicality; wildcard certificates and those which do not expire for an extensive period of time have a significant advantage when it comes to the ease of installing and maintaining systems. Nevertheless, this will always be a trade-off against peace of mind.
All of these considered, certificates are rarely expensive, and installing them on a properly configured and maintained host should not be a particularly laborious task, so generally the best certificates are worth the investment.
There is a lot to consider when it comes to transport security, and it can be hard to know that the information you’re getting is the latest and greatest.
This is where a trusted advisor can help; Secarma are on hand to guide you through the configuration and implementation of your transport security, to give you the confidence you require.