% !TeX root = Fenrir.tex

%\part{Analysis of the problem}[problems.png][This is how I explain computer problems to my cat. My cat usually seems happier than me]

\xkcdchapter[0.4]{Introduction}{abstraction}{We are working purely for your kittens' videos}


\lettrine{A}{lmost} all applications nowadays require an internet connection, and the privacy and security of their communications become more important every day.
All of this security is based on complex cryptography and authentication algorithms, yet only a small fraction of developers actually study these algorithms.
Fortunately we have projects like OpenSSL and LibreSSL, whose sole purpose is to implement these algorithms correctly and to provide programmers with easy interfaces to the security software, so that developers can concentrate on the actual application. However, these projects are neither interchangeable nor equivalent, so programmers still need to choose according to their needs. This also means that many programmers simply adopt the most common solution and adapt their program to it, instead of doing proper research.

The security of an application does not come only from the security of the underlying protocols. Many applications still save passwords using the weak MD5 hash or are prone to SQL injection, others do not bother configuring the proper parameters of the security frameworks they use, and virtually all of them are vulnerable to password reuse. These few examples alone make it easy to understand why security is so hard to maintain, and why it needs to be a core design goal.
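To make the SQL injection example concrete, the following Python sketch (using the standard \texttt{sqlite3} module; the table, column names and input are purely illustrative) contrasts a concatenated query with a parameterized one:

```python
import sqlite3

# In-memory database for illustration only; schema and data are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'x')")

malicious = "' OR '1'='1"

# Vulnerable: string concatenation lets the attacker rewrite the query,
# so the WHERE clause becomes a tautology and matches every row.
rows_bad = db.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: a parameterized query treats the input strictly as data, never as SQL.
rows_ok = db.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(rows_bad), len(rows_ok))  # 1 0
```

The parameterized version matches no user at all, while the concatenated one leaks the whole table: exactly the class of mistake that must be designed out rather than patched later.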


\section{Common attack vectors}

The three main places where security is concentrated are:
\begin{itemize}
	\item the underlying frameworks
	\item the programmer's application
	\item the final user
\end{itemize}

The underlying security frameworks are fairly stable today, as after the Assange and Snowden revelations the security community has put a lot of effort into hardening and analysing the existing solutions. However, this does not mean that the current solutions were designed correctly. The first version of SSL was never released publicly, SSLv2 was released in 1995 due to flaws in the first version, and a year later SSLv3 had to be released to fix further protocol flaws. TLS then fixed some SSLv3 shortcomings and flaws, and we are now almost at version 1.3, after many more flaws and attacks. This happened because the protocol was never formally verified, and the few attempts worked from an incomplete specification.
OAuth suffered the same fate, and even new protocols like SPID \cite{SPID} have never been formally verified. We will see how and why such verification is needed in Chapter \ref{Formal Verification}.


The programmer, however, is still in charge of configuring said frameworks, as well as creating their own authentication interface. This puts a lot of strain on developers, who have to learn more security details than they should need to, forcing their focus away from the application itself. Not only is the developer tasked with understanding security mechanisms, such as the correct way to hash passwords, but by continuing with this model we create a lot of duplicated code, which is often not as secure as it should be. A common example is user identification, where each developer must rewrite the hashing mechanism, the password prompt and the session identifiers, which in the case of web applications (cookies) must not reveal user information.
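As an illustration of what each developer currently has to reimplement, here is a minimal salted password-hashing sketch in Python, using the standard library's memory-hard \texttt{scrypt} KDF; the cost parameters shown are illustrative, not a tuning recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Derive a slow, salted digest; n, r, p are illustrative cost parameters."""
    salt = os.urandom(16)  # a fresh random salt per password defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # hmac.compare_digest avoids leaking where the two digests first differ.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))    # False
```

Even this small sketch involves several decisions (salt length, cost parameters, constant-time comparison) that every application currently repeats on its own, which is exactly the duplication the text describes.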



Finally, users have never really learned not to reuse the same 4-character passwords. They generally do not yet realize that they are one of the weakest links in the security chain, and often underestimate threats such as the use or reuse of weak passwords. The password reuse problem stems from the fact that current authentication protocols have no support for federation of domains, so the user is forced to keep many different username/password pairs. The problem is further complicated by the fact that with mere username/password identification, users can never be sure that their account has not been stolen, or that it is not being accessed from multiple different devices outside their control.


\section{Layering}

The common approach to security, and to application development in general, is layering, and the OSI model is a perfect example of this. Developers initially choose the transport protocol (TCP/UDP), then add encryption (TLS/DTLS), then further session and data presentation handling (HTTP), and finally develop the application.

Layering has increased the overall modularity of the protocol stack, but excessive or wrong layering can increase the complexity of the application, leave it open to multiple attacks, and limit its network capabilities.

An example of a problem in security layering comes from running TLS on top of TCP: this leaves the application open to spoofed TCP resets. The same problem is often found in wireless standards, where the disassociation packet is not authenticated, so an attacker can force repeated reauthentications in order to gather more authentication data. This design flaw greatly increased the efficiency of attacks against WEP.

While layering increases modularity, that same modularity can limit an application's efficiency and capabilities. Web applications have to go through XML, HTTP, TLS and TCP encapsulation, yet are limited to a single reliable bytestream, so multipath and lossy communications (useful in audio/video or realtime contexts) are impossible unless the application itself handles these use cases. This means that layering can increase complexity every time an upper layer drops some functionality of the lower layer.

\label{OSI stack}
Excessive layering can also merely shift complexity upwards through the layers, all the way to the application layer. An example of this is session management in web applications. Analysing the protocols bottom-up, we find the TCP layer, which introduces its own session identification (the IP/port tuple); on top of that sits the TLS session (with its identifiers and encryption keys); and on top of that we find the HTTP protocol, which drops support for sessions altogether. The developer therefore has to design cookies to safely handle sessions yet again, by putting user information in the cookie, possibly using encryption once more to hide that information. The user information, however, comes from the OAuth layer, which works on top of the HTTP/cookie layer. On top of all of this we can finally find JavaScript code that talks to the HTTP server and activates WebSockets, dropping the whole HTTP/cookie/HTML layer and working directly on top of TLS, implicitly tying the user information back to the TLS session again.
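The cookie layer described above is typically secured by signing the session data, so the server can detect client-side tampering. A minimal sketch in Python, assuming a hypothetical server-side secret and a bare user identifier as the only session payload:

```python
import base64
import hashlib
import hmac

SECRET = b"server-side secret"  # hypothetical key, known only to the server

def make_cookie(user):
    """Append an HMAC so the cookie cannot be forged or altered by the client."""
    mac = hmac.new(SECRET, user.encode(), hashlib.sha256).digest()
    return user + "." + base64.urlsafe_b64encode(mac).decode()

def read_cookie(cookie):
    """Return the user id if the signature checks out, None otherwise."""
    user, _, mac_b64 = cookie.rpartition(".")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).digest()
    if hmac.compare_digest(base64.urlsafe_b64encode(expected).decode(), mac_b64):
        return user
    return None

cookie = make_cookie("alice")
print(read_cookie(cookie))        # alice
print(read_cookie(cookie + "x"))  # None (tampering detected)
```

Note that this entire mechanism exists only to rebuild, at the application layer, the session concept that TCP and TLS already provided below HTTP.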




\section{Trusting trust problems}

A commonly overlooked problem in the security field is understanding what the trust model is, that is, how much trust the user has to place in each part of the whole system.

The first thing the user has to trust is the application itself. The application receives the username and password, which are often reused, along with additional user information. But when we use common protocols to access resources (such as IMAP for reading email), we also need to decide which application to trust, to avoid leaking information such as contact lists, or even whole account access, to third parties.

Trust does not come only from the application. X.509 certificates create a hierarchical structure of trust, where we place our trust in the certificate authorities and expect them to behave correctly. This is not always the case, as some authorities, like COMODO and even Symantec, have issued certificates to unauthorized parties, sometimes even for big companies such as Google.

Finally, some protocols (Kerberos, OAuth) rely on a trusted third party for identification. Centralizing authentication on a third party may decouple the application from the authentication, improving the security model, but it gives that third party complete control over our accounts. So, right now, having a Facebook account means trusting Facebook never to access the services on which you used the ``login with Facebook'' functionality, and never to create new accounts elsewhere in your name without notifying you.


\section{The contribution}

This dissertation introduces a new protocol that resolves the previously presented problems by introducing a new authentication algorithm directly tied to the transport protocol. Instead of working on the interaction and synchronization of multiple layers, a new single layer is introduced on top of layer 3, which handles transport, authorization, authentication and encryption at the same time.

By handling all the transport details, the protocol will be able to provide every combination of reliable or unreliable, ordered or unordered delivery. The user will be able to receive the data as a bytestream, like on a common socket, or as complete messages, since the protocol can mark the beginning and end of each user message. Multiple concurrent streams will be usable for parallel data transfers. By handling both transport and encryption, the protocol can grant features like multihoming and IP mobility without introducing additional complexity for those cases.

The goal of this protocol is to decouple the development of an application from its security. The first step is the definition of a federated authentication algorithm, which separates the server application from the authentication by introducing an authentication server. Then the same split is applied on the client side, taking away the handling of the authentication from the various client applications, and delegating it to a dedicated application.

By decoupling authentication from the application on both the server and the client side, much of the authentication process can be based on tokens and made more automatic, less dependent on the user, who will no longer have to remember many different combinations of usernames and passwords.

The existence of an authentication server will provide a centralized way for users to manage their subscriptions and devices, but the authentication server will not be able to impersonate the user on other services. This important distinction will shift the focus of attacks away from the authentication server, as compromising it will not grant access to the users' data.

Finally, an authorization lattice is introduced to handle multiple authorization levels. This will permit users to strictly limit the trust they place in third-party applications.


\section{Dissertation structure}

Chapter \ref{Analysis} will define the ideal requirements of a security framework, enumerating and describing each objective.

The state of the art in protocols and security will be analysed in Chapter \ref{Existing Solutions}. Here we will look at the different layers commonly used in the application stack, ranging from TCP up to OAuth. New and still experimental protocols will also be taken into consideration, along with their limits and implications.

After describing why the current protocols cannot provide a sufficient solution for our needs, Chapter \ref{Fenrir Project} will first define our options in creating a new protocol. The dissertation will then move on to the basic design ideas and algorithms. A detailed analysis of the general security properties offered by the protocol will be followed by an in-depth look at the authorization and authentication mechanisms, where we introduce the concept of the authorization lattice.

Chapter \ref{Validation} discusses the need for formal verification and presents the work done to verify the absence of attacks against the authentication algorithm, with a focus on the handshakes.

Implementation choices are presented in Chapter \ref{Implementation choices}, where we discuss the packet structure and some of the differences from existing protocols, with examples of data packets.

Finally, Chapter \ref{Conclusions} summarizes the achievements and future directions of the protocol's development.