Friday 28 June 2024

Vulnerable From All Sides

Bugs in software engineering are a fact of life. No engineer, whatever they perceive their skill level to be, has ever written a non-trivial piece of software that didn't on some level contain bugs.

These may be logical flaws, undesirable performance characteristics or unintended consequences when given bad input. As the security of software products has grown ever more important, the presence of a particular kind of bug has become something we have to be very vigilant about.

A vulnerability is a bug that has a detrimental impact on the security of software. Whereas any other bug might cause annoyance or frustration, a vulnerability may have more meaningful consequences such as data loss, loss of system access or handing over control to untrusted third parties.

If bugs, and therefore vulnerabilities, are a fact of life then what can be done about them? Well, as with anything, being aware of them, and making others aware, is a step in the right direction.

Common Vulnerabilities and Exposures 

Run by the MITRE Corporation with funding from the US Department of Homeland Security, Common Vulnerabilities and Exposures (CVE) is a glossary that catalogues software vulnerabilities and, via the Common Vulnerability Scoring System (CVSS), provides each with a score to indicate its seriousness.

 In order for a vulnerability to be added to the CVE it needs to meet the following criteria.

It must be independent of any other issues, meaning it must be possible to fix or patch the vulnerability without needing to fix issues elsewhere.

The software vendor must be aware of the issue and acknowledge that it represents a security risk. 

It must be a proven risk: when a vulnerability is submitted it must be accompanied by evidence of the nature of the exploit and the impact it has.

It must exist in a single code base: if multiple vendors are affected by the same or similar issues then each will receive its own CVE identifier.
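
To give a flavour of what a catalogued entry contains, here is a rough sketch of the key fields as a Python dictionary, using the well-known Log4Shell vulnerability as an example. The field names are illustrative, not the official CVE record schema.

    # An illustrative sketch of a CVE entry; the field names are
    # simplified, not the official CVE record schema.
    log4shell = {
        "id": "CVE-2021-44228",  # year plus a sequence number
        "description": "Remote code execution in Apache Log4j 2 "
                       "via attacker-controlled log messages (Log4Shell).",
        "affected_product": "Apache Log4j 2",
        "cvss_score": 10.0,  # the maximum possible severity
        "references": ["https://nvd.nist.gov/vuln/detail/CVE-2021-44228"],
    }

    print(log4shell["id"], "scored", log4shell["cvss_score"])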

Common Vulnerability Scoring System (CVSS)

Once a vulnerability is identified it is given a score via the Common Vulnerability Scoring System (CVSS). This score ranges from 0 to 10, with 10 representing the most severe.

CVSS itself is based on a reasonably complicated mathematical formula. I won't present here all the elements that go into this score, but the factors outlined below give a flavour of the aspects of a vulnerability that are taken into account; they are sometimes referred to as the base factors.

Firstly, Access Vector relates to the access that an attacker needs to be able to exploit the vulnerability. Do they, for example, need physical access to a device, or can they exploit it remotely from inside or outside a network? Related to this is Access Complexity: does an attack need certain conditions to exist at the time of the attack, or for the system to be in a certain configuration?

Authentication takes into account the level of authentication an attacker needs. This might range from none to admin level.

Confidentiality assesses the impact, in terms of data loss, of an attacker exploiting the vulnerability. This could range from trivial data loss to the mass export of large amounts of data.

In a similar vein, Integrity assesses an attacker's ability to change or modify data held within the system, and Availability looks at the ability to affect the availability of the system to legitimate users.

Two other important factors are Exploitability and Remediation Level: the former relates to whether code is known to exist that enables the vulnerability to be exploited, and the latter to whether the software vendor has a fix or workaround available to provide protection.

These and other factors are weighted within the calculation to provide the overall score.
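
To give a flavour of how the weighting works, here is a sketch in Python of the older CVSS v2 base score equation, using its published metric weights (the current v3.x formula differs in detail but follows the same shape). The example vector describes a hypothetical remotely exploitable vulnerability that leaks some data.

    # Published CVSS v2 weights for the base factors.
    ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
    ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
    AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
    IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}  # C, I and A

    def cvss_v2_base_score(av, ac, au, conf, integ, avail):
        # Impact combines the Confidentiality, Integrity and Availability factors.
        impact = 10.41 * (1 - (1 - IMPACT[conf])
                            * (1 - IMPACT[integ])
                            * (1 - IMPACT[avail]))
        # Exploitability combines Access Vector, Complexity and Authentication.
        exploitability = (20 * ACCESS_VECTOR[av]
                             * ACCESS_COMPLEXITY[ac]
                             * AUTHENTICATION[au])
        f = 0 if impact == 0 else 1.176
        return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

    # Hypothetical flaw: remotely exploitable, no authentication needed,
    # partial data disclosure but no integrity or availability impact.
    print(cvss_v2_base_score("network", "low", "none", "partial", "none", "none"))  # 5.0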

Patching, Zero Days and Day Ones

The main defence against the exploitation of vulnerabilities is installing patches to the affected software. Provided by the vendor, these are software changes that address the underlying flaws causing the vulnerability in order to stop it being exploited.

This leads to some important stages in the lifetime of a vulnerability.

The life of a vulnerability starts when it is first inadvertently entered into the codebase. Eventually it is discovered by the vendor, or by a so-called white hat hacker who notifies the vendor. The vendor will disclose the issue via CVE while, alongside this, working on a patch to address it.

This leads to a period of time where the vulnerability is known about, by the vendor, white hat hackers and possibly the more nefarious black hat hackers, but where a patch isn't yet available. At this point a vulnerability is referred to as a zero day: it may or may not be being exploited, and no patch exists to make the software safe again.

It may seem like once the patch is available the danger has passed. However, the nature of a released patch often provides evidence of the original vulnerability and ideas on how it can be exploited. At this point the vulnerability is referred to as a day one: the circle of those who may have the ability to exploit it has increased, and vulnerable systems remain unsafe until the patch has been installed.

CVE provides an invaluable resource in documenting vulnerabilities; forewarned is forearmed. Knowing a vulnerability exists means defensive action can start, and making sure you stay up to date with all available patches means you are as protected as you can be.


Saturday 15 June 2024

The World Wide Internet

Surfing the web, getting online and hitting the net are now ubiquitous among the verbs that describe how we live our lives. They stopped being technical terms a long time ago and now simply describe daily activities that all generations understand and take part in.

In becoming such commonly used terms they have lost some of their meaning, with the web and the internet being interchangeable in most people's minds. However this isn't the case: they are two different, if complementary, technologies.

Internet vs Web

The Internet is the set of protocols that has allowed computers all over the world to be connected and to exchange data, the so-called "network of networks". It is concerned with how data is sent and received, not with what the data actually represents.

Protocols such as the Internet Protocol and the Transmission Control Protocol allow each computer to be addressable and routable, enabling the free-flowing transmission of data.

The World Wide Web (WWW or W3) is an information system that builds on top of the interconnection between devices that the Internet provides, defining how information should be represented, displayed and linked together.

Defined by the concepts of hypermedia and hypertext, it provides the means by which we are able to view data online, how we are provided links to that data, and how that data is visually depicted.
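
To make the layering concrete, here is a minimal sketch in Python. The Internet part is the TCP connection to an addressable machine; the Web part is the HTTP conversation carried over it. The host example.com is just a stand-in.

    import socket

    # The Internet layer: open a TCP connection to an addressable host.
    conn = socket.create_connection(("example.com", 80))

    # The Web layer: speak HTTP over that connection to request a page.
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    response = b""
    while chunk := conn.recv(4096):
        response += chunk
    conn.close()

    print(response.decode(errors="replace")[:200])  # the status line and headers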

The History of the Internet

As computer science emerged as an academic discipline in the 1950s, access to computing resources was scarce. For this reason scientists wanted to develop a way for access to be time-shared, such that many different teams could take advantage of the emerging technology.

This effort culminated in the development of the first wide area network, the Advanced Research Projects Agency Network (ARPANET), built by the US Department of Defense in 1969.

Initially this interconnected network connected a number of institutions including the University of California, the Stanford Research Institute and the University of Utah.

In 1973 ARPANET expanded to become international, with the Norwegian Seismic Array (NORSAR) and University College London being brought on board. Into the 1980s ARPANET continued to grow and started to be referred to as the Internet, as shorthand for internetwork.

In 1982 the TCP/IP protocols were standardised and the foundations of what we now know as the Internet were starting to be put in place.

The History of the Web

The Web was the invention of Sir Tim Berners-Lee as part of his work at CERN in Switzerland. The problems he was trying to solve related to the storage, updating and discoverability of documents in large datasets being worked on by large numbers of distributed teams.

In 1989 he submitted his proposal for a system that could solve these problems, and in 1990 a working prototype was completed, including an HTTP server and the very first browser, named after the project and called WorldWideWeb.

Building on top of the network provided by the Internet, the project defined the HTTP protocol, the structure of URLs and HTML as the way that the data in documents could be represented.

In 1993 CERN made the decision to make these protocols and the code behind them available royalty free, a decision that would change the world forever, enabling the number of websites in the world to grow steadily from tens, to hundreds, to thousands, to the vast numbers that we now take for granted.

Many technologies end up having a profound impact on our lives without their terminology becoming understood by those outside technological circles. But the Internet and the web are different; they are so embedded in our lives that URLs, hyperlinks, web addresses and the like are normal everyday words.

In the early days of ARPANET, and probably also for the WWW project, those teams may have realised they were working on what could be important technologies, but I don't think they would have anticipated quite where their work would lead. But when an idea is a good one, it can go in many unexpected directions.

Sunday 9 June 2024

What's In a Name

Surfing the web seems like a straightforward undertaking: you type the website you want to go to into your browser's address bar, or you click on a result from a search engine, and within no time the website you wanted to visit is in front of you.

But how did this happen simply by typing a web address into a browser? How is the connection made between you and a website out there somewhere in the world?

The answer lies in the Domain Name System (DNS).

All servers on the internet that are hosting the websites you want to access are addressable via a unique Internet Protocol (IP) address. For example, as I write this article, linkedin.com is currently addressable via 13.107.42.14. These addresses aren't practical for human use, which is why we give websites names such as linkedin.com.

DNS is the process by which these human-readable names are translated into the IP addresses that can be used to actually access the website's content.
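
You can see this translation directly from Python's standard library, which asks the operating system's resolver to do the work:

    import socket

    # Translate a human-readable name into an IP address.
    print(socket.gethostbyname("linkedin.com"))  # e.g. 13.107.42.14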

DNS Elements 

Four main elements are involved in a DNS lookup.

A DNS Recursor is a server that receives queries from clients to resolve a website's host name into an IP address. The recursor will usually not be able to provide the answer itself, but knows how to recursively navigate the phone directory of the internet in order to give the answer back to the client.

A Root Nameserver is usually the first port of call for the recursor. It can be thought of as a directory of phone directories: based on the area of the internet the website's domain points at, it directs the recursor to the correct directory that can be used to continue the DNS query.

A Top Level Domain (TLD) Nameserver acts as the phone directory for a specific part of the internet, based on the TLD portion of the web address. For example, a TLD nameserver will exist for .com addresses, another for .co.uk addresses and so on.

An Authoritative Nameserver is the final link in the chain; it is the part of the phone directory that can provide the IP address for the website you are looking for.

DNS Resolution

To bring this process to life let's look at the path of a DNS query if you were trying to get to mywebsite.com.

The user types mywebsite.com into their browser and hits enter; the browser then asks a DNS recursor to provide the IP address for mywebsite.com.

The recursor first queries a root nameserver to find the TLD nameserver that's appropriate for this request.

In this example the root nameserver will respond with the TLD nameserver for .com addresses.

The TLD nameserver will then respond with the authoritative nameserver for the website's domain, in this example mywebsite.com; the location of this server will be related to where the website is hosted. The authoritative nameserver then responds with the IP address for the website, the recursor returns this to the user's browser, and the website can be loaded.
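
To make those steps concrete, here is a sketch of that referral chain using the third-party dnspython library, starting from a real root server (a.root-servers.net, 198.41.0.4). A real recursor adds caching, retries and handling for referrals that arrive without glue addresses, all omitted here for brevity.

    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT_SERVER = "198.41.0.4"  # a.root-servers.net

    def resolve(name, server=ROOT_SERVER):
        # Ask the current server for the A record of the name.
        query = dns.message.make_query(name, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=5)

        # If this server knows the answer, it is the authoritative nameserver.
        for rrset in response.answer:
            if rrset.rdtype == dns.rdatatype.A:
                return rrset[0].address

        # Otherwise we have been referred onwards (root -> TLD -> authoritative);
        # follow a glue A record from the additional section.
        for rrset in response.additional:
            if rrset.rdtype == dns.rdatatype.A:
                return resolve(name, rrset[0].address)

    # The article's hypothetical example domain.
    print(resolve("mywebsite.com."))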

DNS Security

DNS is one of the fundamental technologies that has its origins in the early days of the internet. At the time this blueprint was being created, security was less of a concern to those solving these engineering problems; it was assumed that the authenticity of the links in the chain could be taken on trust.

Unfortunately, in the modern web this level of trust in other actors can be misplaced. When a server claims to be the authoritative nameserver for a particular website, how can you trust that this is the case and that you aren't going to be directed to a rogue impersonation of the website you are trying to reach?

Domain Name System Security Extensions (DNSSEC) attempts to replace the trust-based system with one that is based on provable security. It introduces the signing and validation of the DNS records being returned from the various elements involved in a DNS query, so that their authenticity can be determined.
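
As a small illustration, again using the dnspython library, a query can set the DNSSEC OK flag and then inspect the RRSIG signature records returned alongside the answer; a validating resolver would go on to check these signatures against the zone's published keys (example.com is a conveniently signed zone):

    import dns.message
    import dns.query
    import dns.rdatatype

    # Ask a public resolver for example.com with the DNSSEC OK bit set,
    # requesting signatures alongside the answer records.
    query = dns.message.make_query("example.com.", dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    for rrset in response.answer:
        if rrset.rdtype == dns.rdatatype.RRSIG:
            print("signature:", rrset)  # RRSIG records proving authenticity
        else:
            print("answer:", rrset)     # the signed A records themselves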

DNS is one of those technologies that is now taken for granted, but it solves a problem without which the web as we know it wouldn't be able to exist. On the surface it sounds like a simple problem to solve, but even the simplest of solutions has to work at a worldwide scale.