Saturday 13 April 2024

The Web of Trust

 


Every day, all of us type a web address into a browser or click on a link provided by a search engine and interact with the websites that are presented.

Whilst we should always be vigilant, on many occasions we will simply trust that the site we are interacting with is genuine and the data flowing between us and it is secure.

Our browsers do a good job of making sure our surfing is safe, but how exactly is that achieved? How do we create trust between a website and its users?

SSL/TLS

Netscape first attempted to solve this trust problem by introducing the Secure Sockets Layer (SSL) protocol in the mid-1990s. Initial versions of the protocol had many flaws, but by the release of SSLv3.0 in 1996 it had matured into a technology able to provide a mechanism for trust on the web.

As SSL became a foundational part of the web, and because security-related protocols must constantly evolve to remain safe, the Internet Engineering Task Force (IETF) developed Transport Layer Security (TLS) in 1999 as an enhancement to SSLv3.0.

TLS has continued to be developed with TLSv1.3 being released in 2018. 

Its primary purpose is to ensure that the data exchanged between a server and a client is secure, but also to establish a level of trust such that the two parties can be sure who they are exchanging that data with.

Creating this functionality relies on a few different elements.

Public Key Encryption

Public key encryption is a form of asymmetric encryption that uses a pair of mathematically related keys, one public and one private.

The mathematics behind this relationship between the keys is too complex to go into in this post, but the functionality it provides is based on the fact that the public key can be used to encrypt data that only the private key can decrypt.

This means the public key can be freely distributed and used to encrypt data that only the holder of the private key can decrypt.
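
To make this concrete, here is a minimal sketch of the idea using Python's third-party cryptography library; the message, key size and padding scheme are arbitrary choices made for illustration.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The key pair: the private key stays secret, the public key can be shared.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Anyone holding the public key can encrypt...
    ciphertext = public_key.encrypt(
        b"a secret message",
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # ...but only the holder of the private key can decrypt.
    plaintext = private_key.decrypt(
        ciphertext,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    assert plaintext == b"a secret message"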

The keys can also be used to produce and verify digital signatures. This involves the holder of some data using a mathematical process to "sign" the data with their private key.

The receiver of the data can use the public key to verify the signature and therefore prove that the data came from someone who has the corresponding private key.
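
The same library can sketch the signing flow: the private key produces the signature and the public key verifies it. Again, the message and padding scheme are illustrative choices.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    message = b"data whose origin we want to prove"

    # The holder of the private key signs the data...
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # ...and anyone with the public key can check the signature.
    try:
        public_key.verify(
            signature,
            message,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        print("signature is valid")
    except InvalidSignature:
        print("signature is NOT valid")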

Public Key Infrastructure (PKI)

Public Key Infrastructure (PKI) builds on the functionality provided by public key encryption to offer a system for establishing trust between client and server.

This is achieved via the issuance of digital certificates from a Certificate Authority (CA).

The CA is at the heart of the trust relationship of the web. When two parties, the client and the server, are trying to form a trust relationship, they must delegate to a third party that they both already trust: the CA.

The CA establishes the identity of the organisation the client will interact with via offline means and issues a digital certificate. This certificate records the identity of the organisation and its public key, and is signed by the CA to prove it was the one that issued the certificate.

When a client receives the certificate from the server, it can use the CA's public key to verify the signature and therefore trust the data in the certificate.

It's possible to have various levels of CAs that may delegate trust to other CAs, known as intermediate CAs, but all certificates should ultimately be traceable back to a so-called Root CA that all parties on the web have agreed to trust and whose public keys are available to all participants.
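
As an illustration of the signature check described above, the sketch below uses the same cryptography library to confirm that one certificate was signed by another. The file names are placeholders, it assumes both certificates use RSA PKCS#1 v1.5 signatures, and a real client would also check validity dates and extensions and walk the full chain up to a trusted Root CA.

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import padding

    # Placeholder files: a server certificate and the CA certificate that
    # claims to have issued it.
    with open("server.pem", "rb") as f:
        server_cert = x509.load_pem_x509_certificate(f.read())
    with open("ca.pem", "rb") as f:
        ca_cert = x509.load_pem_x509_certificate(f.read())

    # Use the CA's public key to verify the signature over the certificate's
    # signed contents; InvalidSignature is raised if the check fails.
    ca_cert.public_key().verify(
        server_cert.signature,
        server_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        server_cert.signature_hash_algorithm,
    )
    print("certificate was signed by", ca_cert.subject.rfc4514_string())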

Certificates and Handshakes

All of the systems previously described are combined whenever we visit a website to establish trust and security, as sketched in the code example that follows these steps.
  • A user types a web address into the browser or clicks a link provided by a search engine.
  • The user's browser issues a request to the web site to establish a secure connection.
  • In response, the server sends the browser its certificate.
  • The browser validates the certificate's authenticity by verifying its signature using the public key of the issuing CA, chaining up to a Root CA whose public key is pre-installed on the user's machine.
  • Once the certificate is validated, the browser creates a symmetric encryption key that will be used to secure future communication between the browser and the website. It encrypts the symmetric key using the server's public key and sends it to the server (this is the classic RSA key exchange; modern TLS versions instead derive the shared key via a Diffie-Hellman exchange, but the end result is the same).
  • The user's browser has now established the identity of the website, based on the data contained in its validated certificate, and both parties now have a shared symmetric key that can be used to secure the rest of their communication in the session.
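
A minimal client-side sketch of these steps, using Python's standard-library ssl module, might look like the following. The hostname is a placeholder (any public HTTPS site works) and the default context performs the certificate verification against the Root CAs installed on the machine.

    import socket
    import ssl

    hostname = "example.com"  # placeholder host
    context = ssl.create_default_context()  # verifies against installed Root CAs

    with socket.create_connection((hostname, 443)) as sock:
        # wrap_socket performs the TLS handshake, including certificate
        # validation and agreement of the keys that secure the session.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("TLS version:", tls.version())      # e.g. TLSv1.3
            print("Cipher     :", tls.cipher()[0])    # negotiated cipher suite
            cert = tls.getpeercert()
            print("Issued to  :", dict(x[0] for x in cert["subject"]))
            print("Issued by  :", dict(x[0] for x in cert["issuer"]))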

These pieces of functionality are fundamental to allowing the web to operate in the way it does.

Without the functionality provided by SSL/TLS it wouldn't be possible to use the web as freely as we do whilst also trusting that we can do so in a safe and secure manner.   

Monday 1 April 2024

Imagining the Worst

 


In the modern technological landscape the list of possible security threats can seem endless. The breadth of potential attackers and the potential vectors for their attacks has never been so large. Does this mean we are all just helpless, waiting for an attack and its terrible consequences to befall us?

One way to be proactive in the face of these dangers is to try to anticipate what form these threats might take, what damage they could do, and what countermeasures it might be possible to take.

Threat modelling is a technique for enumerating the threats a system might face, identifying whether safeguards exist to counter them, and analysing the consequences should any of these attacks succeed.

To help developers and engineers with the threat modelling process, Microsoft developed the STRIDE mnemonic in 1999 to serve as a checklist of things for teams to consider when analysing the potential impact of threats to their system.

STRIDE

The STRIDE mnemonic attempts to categorise potential threats in terms of the impact they may have. This allows teams to analyse whether any part of a system may be susceptible and, if so, how this might be mitigated.

Spoofing is the process of falsely identifying yourself within a system. This might be done by using stolen user credentials, leaked access tokens or cookies, or any other form of session hijacking.

Tampering involves the malicious manipulation of data either at rest, for example altering data within a database, or in transit, for example by acting as a man in the middle.

Repudiation relates to an attacker being able to cover their tracks by exploiting any lack of logging or inability to trace actions within a system; this might also include an attacker being able to falsify an audit trail to hide malicious activity.

Information Disclosure occurs when information is available to users who shouldn't be able to view it. This might cover a system returning database records a user has no entitlement to view, or the ability of an attacker to intercept data in transit, again for example by acting as a man in the middle.

Denial of Service is any attack that denies users the ability to legitimately use a system; the most common form is to overwhelm the system with requests or otherwise cause it to become unresponsive or unusable.

Elevation of Privilege occurs when an attacker is able to elevate their permissions within a system under attack; normally this means obtaining administrator privileges or otherwise penetrating a network deeply enough to be trusted more than a normal external user.

Threat Analysis

Many tools and processes exist for implementing threat modelling, but most revolve around a team of system experts brainstorming potential threats that a system or sub-system might be susceptible to.

This involves using analysis helpers such as STRIDE to put yourself in the mindset of an attacker. For example, you may assess whether an authentication system could be exploited via spoofing; the answer might be no because of certain mitigations, or yes because of certain flaws.
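
As a purely hypothetical sketch, the findings of such a session could be recorded as structured data, with each threat tagged with its STRIDE category, a proposed mitigation and a rough risk rating; every name and value below is illustrative rather than a real threat model.

    from dataclasses import dataclass
    from enum import Enum

    class Stride(Enum):
        SPOOFING = "Spoofing"
        TAMPERING = "Tampering"
        REPUDIATION = "Repudiation"
        INFORMATION_DISCLOSURE = "Information Disclosure"
        DENIAL_OF_SERVICE = "Denial of Service"
        ELEVATION_OF_PRIVILEGE = "Elevation of Privilege"

    @dataclass
    class Threat:
        component: str
        category: Stride
        description: str
        mitigation: str
        risk: str  # e.g. "low" / "medium" / "high"

    threats = [
        Threat("login endpoint", Stride.SPOOFING,
               "Stolen credentials reused by an attacker",
               "Multi-factor authentication and credential-stuffing detection",
               "medium"),
        Threat("audit service", Stride.REPUDIATION,
               "Attacker deletes or falsifies audit records",
               "Append-only, centrally stored logs",
               "low"),
    ]

    for t in threats:
        print(f"[{t.category.value}] {t.component}: {t.description} "
              f"(mitigation: {t.mitigation}, risk: {t.risk})")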

When applying this style of analysis to all the aspects of STRIDE, it is unlikely that you will find the system is completely protected against all possible attacks. Instead, you're looking to demonstrate that it is adequately protected given the likelihood of an attack succeeding and the benefit an attacker would gain if it did.

Security is not a design activity that is ever truly complete; instead it is something that evolves over time. You can either choose to learn from mistakes when attackers are successful, or you can attempt to proactively preempt them by performing some self-critical internal analysis to ensure security levels are the highest they can be.