% !TeX root = Fenrir.tex

\xkcdchapter{The Fenrir Project}{standards}{The good thing about standards is that\\there are many to choose from...}
\label{Fenrir Project}

\lettrine{S}{ince} current protocol stacks limit developers in application development, and there is no sensible authentication protocol that supports both authentication and authorization, this chapter is dedicated to the definition of a new protocol that encompasses as many of the requirements of \ref{Requirements} as possible.

\section{The ISO Stack}

The first problem is to understand where to place our protocol in the ISO stack. As we have seen in \ref{ISO stack}, an incorrect placement in the stack can result in overly complex layering that pushes the burden of making everything work onto the developer.

As a reminder, the ISO model is composed of 7 layers:

\begin{center}
\begin{tabularx}{0.35\textwidth}{| c | X | X |}
	\hline
	7&Application & \\ \hline
	6&Presentation & JSON\\ \hline
	5&Session & RPC\\ \hline
	4&Transport & TCP/UDP\\ \hline
	3&Network & IP\\ \hline
	2&Data Link & MAC \\ \hline
	1&Physical & Ethernet\\ \hline
\end{tabularx}
\end{center}

The TCP/IP stack, however, does not follow the model strictly, as the layers are not always independent of one another. If we portray what actually happens in a common web connection, we end up with something like this, where multiple layers limit the capabilities of the upper layers, and a lot of features (like session management) are reimplemented multiple times:

\begin{center}
	\begin{tabularx}{0.35\textwidth}{| X | X | X |}
		\hline
		\multicolumn{3}{|c|}{Application}\\ \hline
		OAuth& cookie & HTML\\ \hline
		\multicolumn{3}{|c|}{HTTP} \\ \hline
		\multicolumn{3}{|c|}{TLS} \\ \hline
		\multicolumn{3}{|c|}{TCP} \\ \hline
		\multicolumn{3}{|c|}{IP}\\ \hline
	\end{tabularx}
\end{center}

The reason why OAuth, cookies and HTML are on the same layer is that they all need direct access to the HTTP headers, and are all dependent on one another. The application then has to manage all of their interactions.

Keeping in mind what we are trying to avoid, let's analyse our choices:

\subsection{High level protocol}

By "high level" we mean anything above HTTP.

If we try to fix the situation with yet another high level protocol (as OpenID-Connect is trying to do), we gain ease of implementation thanks to the abstractions of the lower level protocols and their security properties, but we are also limited by them. Efficiency is also greatly impacted, and we might have to rely on further protocols to avoid the limitations of the protocol stack we chose (as OpenID-Connect has to rely on WebFinger to avoid OAuth's non-interoperability).

This means that our quest for simplicity leads to a contradiction: the more protocols we use, the larger the attack surface becomes, and we still need to handle all the protocol interactions and limitations.

As stated, this is the road chosen by the OAuth and OpenID-Connect authors, so there is little to gain from choosing it again.

\subsection{Low level protocol}

Since we cannot go much higher in the OSI model, we need to understand how low we should go, and what will change for our protocol. Going at or below the IP layer would break the very first requirement: compatibility with the existing infrastructure. The same could be said for working at the TCP/UDP layer, as too many firewalls would require reconfiguration. Fortunately, in this case we can plan for a transitional period in which the protocol is tunnelled via UDP; once enough support has been gained, we can switch to working directly above the IP layer.

We have upper and lower bounds, so let's analyse the case-by-case problems:
\begin{itemize}
	\item \textit{rewrite the HTTP layer}: we are still fairly high in the protocol stack and still limited by TCP, and we still have duplicated session identification (TCP, TLS), but we should be able to implement federated support. However, we would need a new TCP port assignment so as not to interfere with existing protocols, which in turn makes us clash with strict firewalls.
	\item \textit{rewrite the TLS layer}: same as above, but now we also have to handle \textbf{secrecy} and \textbf{authenticity}. We still require a new port assignment.
	\item \textit{rewrite the TCP/UDP layer}: we get complete freedom in communication features, and might be able to implement our full list of requirements \ref{Requirements}. We do not require new port assignments, and if we use UDP tunnelling we retain compatibility with existing firewalls. This is the road chosen by QUIC and minimaLT.
\end{itemize}


At first glance this is much more complex, as we need to reimplement everything from TCP up to OAuth in a single solution, but we can take many features from experimental protocols, and add federation and authorization support, which is found virtually nowhere. The overall stack becomes much shorter, and there is less feature duplication (as in the case of session identifiers).

To summarize what we can achieve here, we can gain in:
\begin{itemize}
	\item \textbf{Robustness}: we can design against amplification attacks, and avoid design problems like the cleartext TCP RST, which can drop any connection.
	\item \textbf{Efficiency}: fewer layers mean less encapsulation, and fewer headers before the user data.
	\item \textbf{Federation}: we can finally design the protocol so that authentication on multiple domains works the same way, by including domain discovery techniques.
	\item \textbf{Transport flexibility}: \textbf{multiplexing} support, and choosing the transport features of every stream (\textbf{reliability, ordered delivery}, etc.) will increase application features while simplifying application development.
	\item \textbf{Multihoming/mobility}: finally, a protocol whose connection status does not depend on layer 3 (IP) data.
	\item \textbf{Datagram}: handling message begin/end regardless of packet fragmentation will further simplify user data management.
\end{itemize}

This is obviously more work, but the overall amount of code in the whole protocol stack will be much smaller, thus reducing the attack surface.

~\\

As we are talking about a new, experimental protocol, the obvious choice is the last one. To avoid the SCTP/DCCP mistakes, the protocol will need to work seamlessly both on top of UDP (to bypass firewall and NAT problems) and directly on top of IP (for efficiency), so we should also plan for a transitional phase between UDP-based and IP-based transport.

Not only will the attack surface be reduced, especially after the code base stabilizes, but there will be no need to analyse the interactions between multiple protocols, thus simplifying both development and analysis.

By not having to rely on old technology, we will be able to fully control the security properties of the system for the first time in the development of a protocol.
It may seem that we are creating an overly complex protocol, but compared with the number of protocols we aim to replace (TCP/UDP, (D)TLS, OAuth and more), the complexity of our solution will clearly be lower than the total complexity of the possible interactions of existing solutions, not to mention that the security properties of the interactions of the different possible user configurations have never been analysed.


\section{Fenrir outline}

\begin{figure}[h]
    \centering
    \includegraphics[width=0.5\textwidth]{images/Fenrir_logo.png}
    \caption{Fenrir Logo}
    \label{fig:Fenrir_Logo}
\end{figure}



\subsection{Federated Authentication}

The main feature of our protocol is federated authentication. This requires some form of interaction between the servers of multiple independent domains, with each domain trusting only its own users for authentication. Each user will therefore be identified by username and domain, in an email-like format.

For this, we reuse the distinction made by Kerberos, and divide the players into three (plus one):

\index{Client Manager}\index{Authentication Server}\index{Federation}
\begin{itemize}
\item \textbf{Authentication Server}: in short: \textbf{AS}. Handles authentication and authorization for its domain.
\item \textbf{Service}: the service the client wants to use. Be it the mail service or a web service.
\item \textbf{Client}: the user program that connects to a \textit{Service}.
\item \textbf{Client Manager}: the program that manages authentication data for the user.
\end{itemize}

\subsection{Decoupling authentication from application}\index{Authentication!Decoupling}

The first two distinctions are fairly straightforward: the authentication server handles authentication, so that the service can be designed without access, for example, to the password database or to the user login. This is an important distinction, as applications are vulnerable to bugs and have a much larger attack surface. By decoupling the two, the user and password databases should be better protected, as the only application with access to them is the one specifically designed to protect them.

The distinction between \textit{Client} and \textit{Client Manager} has the same purpose. Current applications usually save login information in cleartext, or in a poorly-obfuscated manner (base64 and the like). For the same reasons as before, we want to decouple authentication from the application itself. This permits the system-wide usage of strong authentication methods like security tokens or smart cards, and provides better support for authentication algorithms, instead of having clients rely on old methods like the deprecated SSLv3. Over time this will provide better security for tomorrow's legacy applications, as their security will be upgradable regardless of the application itself.

Decoupling authentication from the application has one more interesting outcome: as the \textit{Client Manager} handles both authorization and authentication, it can limit the application scope, so that the user can restrict applications they do not trust, or applications that only need to check for the existence of an account.

This means that the \textit{Authentication Server} and the \textit{Client Manager} will be the primary targets of attacks on Fenrir, but it also means that the attack surface will be much smaller, and efforts can be concentrated on a single piece of software. As popular software migrates towards the web, this situation is increasingly common anyway, since web browsers need to track each and every user password; and since users rarely care enough about security, the password database is often as good as cleartext. Moreover, the attack surface of a browser is huge, especially thanks to its plugin system.

\subsection{The authentication protocol}

Federated authentication algorithms are nothing new. As per our earlier distinction, we will focus on a Kerberos-like infrastructure.

Due to the previously introduced decoupling, our protocol needs some way to convey to the various interacting players that a user has been authenticated and authorized. This is done through a token, so that logins and passwords never reach the \textit{Client} or the \textit{Service}, and are used as little as possible.

One characteristic of many authentication algorithms is the usage of timestamps to protect the authorization tokens or general messages. While this provides safety by putting an expiration time on the usage of said tokens, it also means that applications, servers and authentication servers must have at least loosely-synchronized clocks.


Although clock synchronization nowadays seems easy and widespread, it is not yet at a state where we can safely assume that clocks have little discrepancy. Embedded devices are still produced without a clock source, so each time they boot the clock is reset to 1970. The most famous clock synchronization protocol (NTP) is almost always used in cleartext, and basing our clock on an attacker's response is not wise.

Requiring all clocks in the world to be synchronized to within a couple of minutes of each other, even on devices that lack stable clock sources, is in our opinion wrong. Therefore Fenrir will \textit{not} use timestamps. This means that occasionally additional round trips will be needed to check the validity of the data, but it also means that tokens can be simplified, as they no longer need signatures, and token revocation is effective immediately.
Luker's avatar
Luker committed

This choice is further supported by the existence of the \textbf{Online Certificate Status Protocol} \cite{OCSP}. X.509 certificates are essentially proofs of authentication granted for a fixed time (usually one year). For many years there was no way to revoke a certificate in a useful time frame, as the CRLs (Certificate Revocation Lists) were updated very slowly. Once such CRLs became not only too slow but also too big, OCSP was introduced to check in real time whether a certificate had been revoked. This alone proves that basing our protocols on time differentials alone is not sufficient.

As figure \ref{fig:FederationOutline} describes, the protocol relies heavily on the authentication servers, which act as trusted third parties. The image is only an outline, and assumes a shared token between client@example.com and the authentication server of example.com.


\label{FederationExample}
\begin{figure}[t]
\centering
\begin{framed}

\centering
\begin{tikzpicture}[node distance=4cm,>=stealth]
\node[label=above:{Auth.Srv example.com}] (AS1) {\includegraphics[width=2cm,keepaspectratio]{images/auth_server.png}};
\node[label=above:{Auth.Srv domain.com}, right= 3.5 cm of AS1] (AS2) {\includegraphics[width=2cm,keepaspectratio]{images/auth_server.png}};
\node[below of=AS2,left=2.5cm of AS2, label=below:{Client example.com}] (C) {\includegraphics[width=2cm,keepaspectratio]{images/computer.png}};
\node[below of=AS2,right=1.5cm of AS2, label=below:{Service domain.com}] (S) {\includegraphics[width=2cm,keepaspectratio]{images/server.png}};

\draw[<-,thick] (AS2.180) -- node[below]{$1: auth, use ``service''$} (C.90);
\draw[<->,thick] (AS1.30) -- node[below]{$2: check account$} (AS2.150);
\draw[->,thick] (AS2.340) -- node[right]{$3: new user: id, keys$} (S.60);
\draw[<-,thick] (AS2.300) -- node[below]{$4: user registered$} (S.120);
\draw[->,thick] (AS2.250) -- node[below]{$5: ok: ip, keys$} (C.20);
\draw[<->,thick] (C.340) -- node[below]{$6: communication$} (S.200);
\end{tikzpicture}
\end{framed}
\caption{Fenrir overview: client@example.com connects to ``service'' in domain.com}
\label{fig:FederationOutline}
\end{figure}


The service never receives the login data; in fact it only gets a connection id, cryptographic keys and an internal user id. As confirmation, the client receives the IP, connection id and keys needed to connect to the service, so that no further handshakes are needed. Moreover, since the client is notified only once the connection has been confirmed between the authentication server and the service, we avoid non-intuitive errors like ``authenticated but not connected'', typical of federated protocols like Kerberos or OAuth, where the authentication server is too detached from the service.

The authentication server receives a lot of trust from the user, as it needs to control their authentication and authorization data, but it cannot impersonate the user on other domains' services, as the service and the user also share a secret key. There is still room for impersonating a client on the same domain, although that is technically unavoidable in any design, since the service and the authentication server belong to the same organization (authentication and logs can always be forged by administrators that control the services).

~\\



We will now follow a detailed bottom-up approach to designing the protocol.


\section{Transport}

\subsection{Layer 4: UDP tunnel}\index{Transport UDP}

The history of the SCTP protocol tells us that introducing new functionality while being incompatible with the existing network infrastructure means dooming ourselves to fail. It also tells us, however, that a UDP-based protocol can move up to a standalone one, given enough traction. SCTP perhaps evolved too quickly from UDP-based to standalone: firewalls worldwide did not update, and very few applications ended up using it.

As our initial requirement is compatibility with the existing infrastructure, our protocol will be based on top of IP, but will include an optional lightweight UDP tunnel among its main components, so that existing infrastructure will have no problems using it.

Using UDP as a lightweight tunnel permits us to use a single socket for the transmission and reception of every connection, without having the kernel track every connection for us. Firewalls will permit UDP connections, as the DNS system is based on them, and NATs will continue working as always.

Using UDP permits us to handle everything in user space; the only thing that a non-UDP connection needs from the kernel is to forward the packet as-is to the right application.
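
As a minimal sketch of this single-socket model (plain POSIX sockets; \texttt{dispatch\_by\_connection\_id} is a hypothetical Fenrir function, not part of any specification):

\begin{verbatim}
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Hypothetical dispatch function: looks up the connection state
 * from the id carried in the first 4 bytes of every packet. */
void dispatch_by_connection_id(uint32_t id, const uint8_t *pkt,
                               size_t len,
                               const struct sockaddr_storage *src);

/* One UDP socket serves every connection: the kernel keeps no
 * per-connection state, we demultiplex by connection id ourselves. */
void fenrir_udp_loop(int udp_fd)
{
    uint8_t pkt[65535];
    struct sockaddr_storage src;
    socklen_t srclen;

    for (;;) {
        srclen = sizeof(src);
        ssize_t len = recvfrom(udp_fd, pkt, sizeof(pkt), 0,
                               (struct sockaddr *)&src, &srclen);
        if (len < 4)       /* too short to carry a connection id */
            continue;
        uint32_t id;
        memcpy(&id, pkt, 4);
        dispatch_by_connection_id(ntohl(id), pkt, (size_t)len, &src);
    }
}
\end{verbatim}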


\subsection{Layer 4.5: Fenrir}

\subsubsection{Connection}

The very first thing we need is something to identify the connection. older protocols use the tuple of source IP, destination IP, source and destination PORT,
but this is unnecessary and spans multiple protocols which should be independent. The solution is simply to use a single identifier as the connection id.
This alone grants independence from the IP layer, enabling us to support multihoming and mobile clients, and all encryption data will be based on this id.

Once we have identified the connection, we need to check whether the packet is legitimate. In order, we first check whether the packet is corrupted, through an error correction code; then we check the packet's legitimacy by verifying a cryptographic header (HMAC-like); finally we decrypt the packet and access its contents. The last two steps can be condensed into one if we use AEAD ciphers (Authenticated Encryption with Associated Data).
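
A sketch of this receive path, assuming an Encrypt-then-MAC suite; every function below is a hypothetical placeholder for whatever error correction code, MAC and cipher the handshake negotiated:

\begin{verbatim}
#include <stddef.h>
#include <stdint.h>

/* Hypothetical placeholders for the negotiated algorithms. */
struct connection { const uint8_t *auth_key, *cipher_key; };
int ecc_check(const uint8_t *pkt, size_t len);
int mac_verify(const uint8_t *key, const uint8_t *pkt, size_t len);
int decrypt(const uint8_t *key, uint8_t *pkt, size_t len);

/* Receive path for one packet, Encrypt-then-MAC mode. */
int fenrir_check_packet(struct connection *c, uint8_t *pkt, size_t len)
{
    if (!ecc_check(pkt, len))                /* 1: drop corrupted packets */
        return -1;
    if (!mac_verify(c->auth_key, pkt, len))  /* 2: verify authenticity    */
        return -1;                           /*    before anything else   */
    return decrypt(c->cipher_key, pkt, len); /* 3: only then decrypt      */
    /* with an AEAD cipher, steps 2 and 3 collapse into a single call */
}
\end{verbatim}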

The algorithm to be used will be decided during the handshake: this may cost us one full RTT but, unlike minimaLT, we will not be tied to a single algorithm, so that if problems are found in the future we can simply switch to a new algorithm instead of throwing away the whole protocol.

With this setup, all data apart from the connection id will be encrypted and authenticated. Regarding the encryption, 4 methods are available:

\begin{itemize}
\item \textbf{MAC then Encrypt} : First authenticate the cleartext data, then encrypt everything. Used in TLS.
\item \textbf{Encrypt AND MAC} : authenticate the cleartext, encrypt the cleartext, and transmit the MAC in cleartext alongside the ciphertext.
Used in SSH.
\item \textbf{Encrypt then MAC} : Encrypt the cleartext, then authenticate the encrypted cleartext. Used in IPSEC.
\item \textbf{Authenticated Encryption}: new algorithms can authenticate data while encrypting it, so that no additional headers are required.
\end{itemize}

These various methods were analyzed in 2008\cite{Bellare:2008:AER:1410264.1410269}, and in 2009 even the ISO/IEC committee proposed a
standard\cite{ISOIEC19772} in which the recommended mode of encryption is \textbf{Encrypt-then-MAC}. Authenticated encryption algorithms were born later,
but are regarded as being as secure as the ISO-recommended method.

A short summary of the analysis mentioned earlier:
\begin{itemize}
\item \textbf{MAC then Encrypt} : Only the cleartext is authenticated. If the cipher is malleable, the attacker can change both the MAC and the cleartext.
This is what happened in WEP.
\item \textbf{Encrypt AND MAC} : The MAC is not secret, and thus can reveal information on the cleartext (especially in contexts where data is mostly static).
It does not grant ciphertext integrity.
\item \textbf{Encrypt then MAC} : ciphertext integrity is granted, so cleartext integrity is granted by composition. Authenticity can be verified before
decryption.
\end{itemize}

Due to these reasons, the only modes available in Fenrir will be AEAD and Encrypt-then-MAC. An example of a famous AEAD cipher is OCB3\footnote{\href{http://www.cs.ucdavis.edu/~rogaway/ocb/}{OCB homepage: papers and implementation: http://www.cs.ucdavis.edu/~rogaway/ocb/}},
which is now an IETF draft.\\
Recently the CAESAR\cite{CAESAR} competition was created to determine a standard among the various AEAD ciphers; the winner is expected to be announced
towards the end of 2017 or the beginning of 2018.

Due to the inclusion of the AEAD ciphers, and the fact that different authentication headers can have different lengths, the actual length (or existence) of
the authentication header is tied to the connection and decided during the handshake, and thus will not be specified in the packet.

To help in debugging and for specific applications, cleartext data transmission should be supported (although always authenticated). While authentication
is always needed, encrypting public data might be a lot of work for resource-constrained devices, so it should be possible to explicitly disable it.

\rowcolors{1}{green}{green}
\begin{tabularx}{0.7\textwidth}{| l || X | X | X | X |}
\hline \hline
\hiderowcolors bytes &0&1&2&3\\ \hline \hline
  0-3 & \multicolumn{4}{c|}{Connection id} \\ \hline
  & \multicolumn{4}{c|}{Encrypted data} \\ \hline
    & \multicolumn{4}{c|}{Authentication header} \\ \hline \hline
\end{tabularx}

\subsubsection{Multi Stream}\index{stream}

After granting secrecy and authenticity, we need to provide an easy way to multiplex the user and protocol data.

Not providing a multiplexing service leads developers to recreate protocols like FTP, which use multiple connections: these are hard to track for firewalls and
more complex to maintain from the programmer's point of view. This probably contributed to developers porting their applications to the web, as all
those details are handled by the web server. Then, to avoid the limitations of the HTTP protocols, the WebSocket and WebRTC standards were created, so
that applications could ``downgrade'' the HTTP connection to a raw socket.

SCTP was the first protocol that tried to fix these problems, and more recently SPDY (HTTP/2) and QUIC implemented its main feature,
\textbf{multiple streams}, so that the programmer can use one stream per resource, thus parallelizing network flow and managing multiple
connection characteristics (e.g.: unreliable vs reliable delivery) for the user.

Fenrir uses the same base idea as SCTP, so while QUIC and other protocols provide only multiplexing, Fenrir can handle both datastream transmission and
datagrams with message boundaries, thus relieving the user of the need to parse and handle message lengths or escape sequences to find the beginning and end
of user messages.

To implement this, we need a header similar to the SCTP one, but it can be much simpler: we only need a stream id, the data length, a counter to be able
to reorder fragments, and some flags to signal whether this is the first fragment, the last fragment, or a full user message.

Again, in order to provide maximum flexibility and to ease the porting of existing applications to Fenrir, the protocol must not limit itself to ordered, reliable,
datastream delivery, or the programmer will be forced to use, yet again, other protocols inside the application, thus decoupling the security provided by Fenrir
from the security of the application (which we want to avoid).

Although the setup is simple, we now have a multiplexed transfer protocol, where each stream can handle any combination of ordered/unordered and
reliable/unreliable delivery, in datastream or datagram mode, with automatic fragmentation across multiple packets.

SCTP has many different headers, each with its own structure and purpose, but that increases the overall parsing complexity. In Fenrir, instead,
even the protocol messages are carried in a normal (reliable) stream, so we can simplify the parsing and waste fewer bytes on the header.

~\\

\rowcolors{1}{green}{green}
\begin{tabularx}{0.7\textwidth}{| l || X | X | X | X |}
\hline \hline
\hiderowcolors bytes &0&1&2&3\\ \hline \hline
  0-3 & \multicolumn{4}{c|}{Connection id} \\ \hline
  & \multicolumn{4}{c|}{Encryption Header} \\ \hline
\showrowcolors  4-7 & \multicolumn{2}{c|}{Stream id} & \multicolumn{2}{c|}{Data length} \\ \hline
  8-11 & flags & \multicolumn{3}{c|}{Stream counter} \\ \hline
\hiderowcolors    & \multicolumn{4}{c|}{Authentication header} \\ \hline \hline
\end{tabularx}
\begin{center}
The packet so far. Green color highlights the encrypted data
\end{center}
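
As an illustration, the header above could be mapped onto a C view like the following (the field names are ours, and the exact byte-level packing is an assumption for illustration):

\begin{verbatim}
#include <stdint.h>

/* Byte-level view of the stream header above; everything after the
 * connection id sits inside the encrypted payload. Field names and
 * exact packing are illustrative, not normative. */
struct fenrir_stream_hdr {
    uint16_t stream_id;   /* bytes 4-5 : stream identifier          */
    uint16_t data_length; /* bytes 6-7 : length of this fragment    */
    uint8_t  flags;       /* byte  8   : start/end + counter bits   */
    uint8_t  counter[3];  /* bytes 9-11: low 24 bits of the counter */
};
\end{verbatim}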

Since we have a full byte for flags, we can allocate the bits as follows:
\begin{itemize}
\item \textbf{0-1}: start/end of user message. If neither is present, this is an intermediate fragment.
\item \textbf{2-7}: used as the high-order bits of the following ``stream counter'' field, thus extending the counter to 30 bits ($2^{30}$ values); a reconstruction sketch follows this list. See \ref{stream counter}
for a more in-depth explanation.
\end{itemize}
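
A sketch of how a receiver could rebuild the 30-bit counter from the flags byte and the 24-bit counter field (the exact bit placement is our assumption):

\begin{verbatim}
#include <stdint.h>

#define FLAG_MSG_START 0x01u /* bit 0: first fragment of a message */
#define FLAG_MSG_END   0x02u /* bit 1: last fragment of a message  */

/* Rebuild the 30-bit stream counter: bits 2-7 of the flags byte are
 * the high-order bits, the next 3 header bytes are the low 24 bits. */
static uint32_t stream_counter(uint8_t flags, const uint8_t ctr[3])
{
    uint32_t high = (uint32_t)(flags >> 2);           /*  6 bits */
    uint32_t low  = ((uint32_t)ctr[0] << 16) |
                    ((uint32_t)ctr[1] << 8)  |
                     (uint32_t)ctr[2];                /* 24 bits */
    return (high << 24) | low;                        /* 30 bits */
}
\end{verbatim}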

Having a bigger stream counter helps in the case of high-latency, high-bandwidth links (like satellite links), where the time spent waiting for the ACK of the
first packet might be as long as the time spent filling the sliding window.

Having a message counter is useful for unreliable, unordered data transfer: the stream counter alone does not tell us whether the lost packets carried
a missing start or end flag, so even without rebuilding the data, we might be able to hand partial data to the user while still distinguishing (up to a point)
which user message it belongs to.

\subsection{Reliability mechanisms}

Since Fenrir is designed to work on top of the IP layer, we need to reimplement retransmission mechanisms and flow control algorithms.

The ACKs are designed after the SCTP ones, where a single packet can ACK multiple segments of various streams (SACK). This means that
instead of just reporting the last received byte, we report the stream id along with the first and last byte of ACK'd data. This allows us to ACK even
non-sequential data, so that retransmission is more efficient, just like in SCTP.\\
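
A sketch of what a single SACK entry could look like (field names and sizes are illustrative assumptions, not taken from SCTP or from a Fenrir specification):

\begin{verbatim}
#include <stdint.h>

/* One selective-ACK entry: acknowledges a contiguous range of one
 * stream. An ACK packet carries an array of these, so a single
 * packet can acknowledge non-sequential ranges on several streams. */
struct fenrir_sack_entry {
    uint16_t stream_id; /* stream the range belongs to       */
    uint32_t from;      /* first acknowledged byte (counter) */
    uint32_t to;        /* last acknowledged byte (counter)  */
};
\end{verbatim}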

QUIC introduces a \textbf{Forward Error Correction}\index{Forward Error Correction} mechanism for transmitted data which works a lot like RAID 5:
every two packets, a third packet is sent that is the XOR of the previous two. This is a novel way to handle error correction in network transmission
protocols, but it is not flexible enough, as the correction does not span more than two consecutive packets, and cannot be tuned to the average channel error
rate.
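
The limitation is easy to see once the scheme is written down: the parity packet is a plain XOR, so each group of two packets can recover at most one loss:

\begin{verbatim}
#include <stddef.h>
#include <stdint.h>

/* QUIC-style FEC in miniature: the parity packet is the XOR of the
 * two preceding packets, so either one of them can be rebuilt from
 * the other plus the parity -- but never both, and the group size
 * cannot adapt to the channel error rate. */
void xor_parity(const uint8_t *p1, const uint8_t *p2,
                uint8_t *parity, size_t len)
{
    for (size_t i = 0; i < len; i++)
        parity[i] = p1[i] ^ p2[i];
    /* recovery of a lost packet: lost = survivor XOR parity */
}
\end{verbatim}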

To fix this, Fenrir will use the RaptorQ \cite{rfc6330} error correction mechanism, a highly configurable code in which an unlimited number of error recovery packets
can be generated from a user-defined set of packets. Although the CPU and memory requirements can rise significantly with the size of the set of packets
we want to protect (quadratic memory, cubic CPU time), applying error correction to a set of a thousand packets is still relatively inexpensive.

Forward error correction will be activated only if requested, and per stream, so that different reliability levels can be achieved, depending on what
the programmer needs.



\section{The Handshake}\index{Handshake}

This part is crucial in the protocol design, as it needs to handle connection setup, negotiation of protocol features, cryptography details and more, but in the
fewest packets possible, for efficiency.

Since this is a security protocol, we first need to pay attention to where the trust of the system comes from. The naive choice would be the widely
used Certificate Authority model, but that has its own problems; we will analyze the trust model later, in \ref{Trusting Trust}. For now, let's just assume
we have safely obtained a public key and the minimum connection information, and design the handshake from there.

This critical phase needs to handle user authentication and authorization, can be subject to DDoS or amplification attacks, and needs to be as efficient
as possible.

Current protocol stacks defend poorly against DDoS attacks due to the underlying TCP layer, and force the system to perform multiple handshakes
(TCP-TLS-OAuth). Fenrir also needs a flexible handshake, to avoid being tied to a single authentication method or encryption algorithm like minimaLT.

One feature in this area, introduced by both QUIC and minimaLT, is the support for multiple handshake types, depending on the information available
to the client.

The most secure handshakes can take up to 3 full round trips, but minimaLT and QUIC can go down to a 0-round-trip connection setup. Unfortunately,
handshake length and overall handshake security seem to be inversely correlated, but some concessions can be made in terms of
security if the application needs a very low connection setup time.


The next section is dedicated to the security and design of the handshake. For now, we only need a way to quickly distinguish a handshake packet
from an established connection.

For this, we reserve the connection id ``0'' for connection setup, and leave the packet completely in cleartext. The stream identifier will be random,
and will help us track the different handshakes.

For the in-depth details of the handshake, see the next chapter.


\section{Security}
\label{Trusting Trust}

\lettrine{W}{e} will now analyze in depth the handshake, the security, and the trust model that form the basis of the Fenrir protocol. While doing so,
we will point out which choices are forced on us by older solutions and trust models, and propose our improvements.

Authentication and authorization are left to chapter \ref{Credentials}, as they need an in-depth discussion and depend on the connection setup foundations that we discuss now.

\subsection{Flexibility}

Many attacks on TLS concentrated on its complexity and on the interactions between different handshake versions and ciphers. Because of this, new experiments
like CurveCP\footnote{\url{http://curvecp.org/}} and its successor, minimaLT\footnote{\url{http://cr.yp.to/tcpip/minimalt-20130522.pdf}}, have tried to throw away all flexibility and concentrate on a single encryption algorithm. This made those protocols much more efficient and simple, but a single failure in the
cipher or protocol means they will have to be thrown away.

For this reason, and since TLS can still be used today exactly thanks to its flexibility in both specification and cipher choice, Fenrir will have
multiple handshakes and use multiple ciphers. Moreover, there currently exists no stable, proven algorithm that can withstand quantum computer
attacks, so fixing ourselves on one choice, or on a single set of choices, might significantly shorten the protocol's life.


\subsection{Trust model}\label{trust_model}\index{Trust Model}

Every cryptographic system is based on the distribution of symmetric or asymmetric encryption keys. Distributing keys means that we have to trust
someone, as that someone will be able to forge and distribute keys that can completely undermine any system.

The most widely spread approaches to this are the PGP \textit{web of trust} and the \textit{public key infrastructure}
used with X.509 certificates by TLS and others.

The problem with the web of trust is that trust is transitive: if I trust one key, then I implicitly trust all the keys signed by that key. This means that
compromising a single key can make the whole web crumble. It also means that to know whether a key is trusted, we need to traverse the whole web of trust.
This is not efficient, it vastly increases the attack surface, and it also means that the validity of a signature depends on the web of trust we are using
(that is: there is no global state).\\
This approach has only been used by relatively small communities that can manually check the validity of the certificates.

The public key infrastructure works by trusting one or more central authorities that have the explicit task of verifying whether a certificate is owned by
its rightful owner. The problem with this approach is the reliability of said certificate authorities. In the past, big organizations like COMODO had their
certificates stolen, or had security lax enough that anyone could obtain a certificate they were not supposed to get. Since a typical computer
trusts more than a hundred different authorities, and since these authorities are located all over the world, even in countries with oppressive regimes,
this model is seen as lacking in trust. Still, it is the standard for trust systems nowadays.\\

Whatever the choice, a public key needs to be transmitted in some format. Historically this format has been X.509. It is a big standard, dating back to 1988,
which received multiple corrections that made it very difficult to parse, as shown by the sizes of current parsers: the polished and simplified parser of LibreSSL is currently more than 10,000 lines of C code, while the OpenSSL and GnuTLS versions are 25k and 35k lines respectively. This has been a real problem,
exploited multiple times in both the base libraries (*SSL) and the programs that use them.


There does not appear to be a solution to all the technical and political problems behind this, and as such Fenrir tries not to choose either of the two models.\\
The web of trust is hard to keep secure and is slower, while the public key infrastructure based on current certificate authorities is secure enough for the
average user, but as good as broken against many governments. The only choice Fenrir does enforce is to stop using the difficult X.509 certificate format,
and to fall back to handling raw public/private keys.

Still, we need a way to obtain the public keys safely. The public key is the minimum and most important piece of information we need to safeguard, but we
might want to protect other data as well, such as the IP address that resolves to the authentication server. Fenrir will not force a single method to obtain the necessary
information, as new and more secure systems might come out in the future, or some other trust model might be required.

\subsubsection{Connection information needed}

To set up a connection with TLS, we first resolve the IP address of the server, then connect to a specific port (e.g.: 80/tcp for the web) on the given
IP, then get the certificate and check it against our trust model and the server FQDN; only then can we connect. This flow is very basic, but we can already note
a couple of things:
\begin{itemize}
\item the FQDN resolvers almost never use DNSSEC, and in fact the IP address is often intercepted and modified to support transparent proxies. Even if we
used DNSSEC, we would be leaking information such as whom we are connecting to.
\item the 80/tcp port identifies the service we are using. While the web is now ubiquitous, we are still leaking part of the protocol identification.
\item the certificate does not protect IP address(es) or ports, and leaks the service FQDN (this is needed in TLS to support different certificates on the same
server).
\end{itemize}

Although this is difficult to exploit, we are leaking a lot of information, and we have to leak it due to the legacy of the internet's development.
Efficiency has also suffered: adding multiple FQDN-to-IP resolutions to the DNS system made it necessary for DNS packets first to grow
from a maximum size of 512 bytes to as much as the packet provides, and then we even had to switch to DNS over TCP to support even bigger records.
Such records are needed for load balancing, failover configurations, and for integrating other information such as text, binary data, encryption keys or signatures
into the DNS system.

Thanks to a clean design, we can now hide more information and increase flexibility and efficiency.

\subsubsection{DNSSEC resolution}\index{DNSSEC}

DNSSEC was the solution to the need for secure FQDN-to-IP resolution, but it greatly increases the size of each record, as every record carries cryptographic
signatures. However, we have records for everything we need:
\begin{itemize}
\item ip address(es): A, AAAA records
\item ports, ips, failover support: SRV records
\item public keys: specific formats (IPSEC, SSH...) or TXT records
\end{itemize}

As we are not tied to a common format, we can encode everything we need in a single TXT record, thus saving bandwidth, packets and round trips.
Since most of this data is binary, it needs a text encoding: the \textbf{Z85} encoding (a variant of \textbf{base85}) will be used as the
standard, as we need a text format and base64 is less efficient. Z85 is also designed to be safe from common escape sequences (quotes, double quotes, backslash) used in XML and other
formats, so we can keep the same encoding even when experimenting with other directory services.
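
As a sketch, the Z85 reference implementation shipped with libzmq could be used directly (the input length must be a multiple of 4 bytes; the 32-byte key below is a stand-in):

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>
#include <zmq.h>   /* zmq_z85_encode() ships with libzmq 4.x */

int main(void)
{
    const uint8_t key[32] = {0};   /* stand-in for a 256-bit public key */
    char txt[32 * 5 / 4 + 1];      /* Z85: 5 text chars per 4 bytes     */

    /* input length must be a multiple of 4 bytes */
    if (zmq_z85_encode(txt, key, sizeof key) != NULL)
        printf("TXT record payload: %s\n", txt);
    return 0;
}
\end{verbatim}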

We want to publish information on the authentication servers, including load balancing and fallback features.

The format can thus be (in binary):
\begin{itemize}
\item \textbf{2 bytes}: UDP port (port 0 means raw IP)
\item \textbf{1 byte}: number of address structures, formed as:
  \begin{itemize}
  \item \textbf{1 byte}: bitfield for type and address priorities
    \begin{itemize}
    \item \textbf{0-1}: 00: IPv4, 01: IPv6.
    \item \textbf{2-4}: priority group (for fallback)
    \item \textbf{5-7}: preference priority within the same group
    \end{itemize}
  \item \textbf{X bytes}: IP address, size dependent on the previous type
  \end{itemize}
\item \textbf{Y * 2 bytes}: bitfield of supported authentication algorithms
\item \textbf{2 bytes}: public key type\label{DNSKEYID}
\item \textbf{2 bytes}: public key id
\item \textbf{Z bytes}: public key. size/format dependent on type.
\end{itemize}
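
A parsing sketch for this record, after Z85 decoding (the bit placement and the continuation bit of the algorithm bitfield are our assumptions):

\begin{verbatim}
#include <stddef.h>
#include <stdint.h>

/* Returns the offset of the public key, or -1 on malformed input.
 * Bit placements follow the list above; they are illustrative. */
long parse_record(const uint8_t *r, size_t len)
{
    if (len < 3)
        return -1;
    /* r[0..1]: UDP port, 0 means raw IP */
    uint8_t n_addr = r[2];
    size_t off = 3;
    for (uint8_t i = 0; i < n_addr; i++) {      /* address structures */
        if (off >= len)
            return -1;
        uint8_t b = r[off++];
        /* bits 0-1: type (00 IPv4, 01 IPv6), 2-4: fallback group,
         * 5-7: preference inside the group */
        size_t addr_len = ((b & 0x03u) == 0x01u) ? 16 : 4;
        if (off + addr_len > len)
            return -1;
        off += addr_len;
    }
    do {                    /* 16-bit algorithm bitfields; we assume */
        if (off + 2 > len)  /* the low bit of the last byte signals  */
            return -1;      /* that one more bitfield follows        */
        off += 2;
    } while (r[off - 1] & 0x01u);
    if (off + 4 > len)      /* key type (2 bytes) + key id (2 bytes) */
        return -1;
    return (long)(off + 4); /* the key occupies the remaining bytes  */
}
\end{verbatim}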

This format is dynamic and has all the information needed; it is encoded from binary, so we cannot be more efficient than this within the DNS system, and
we get all the features that would otherwise require multiple DNSSEC records and signatures. As the published information regards only the authentication
server, and the service IPs are given out by the A.S., a further round of load balancing and failover can be applied directly inside the A.S., coupled
with geo-IP optimizations or further customizations as needed, without affecting the standards.

To better support the handshake algorithms, we can publish in the DNS a list of supported asymmetric keys (variable $Y$ above), held as a bitfield
of supported algorithms. When the last bit is 1, one more 16-bit bitfield follows.

We also have a complete decoupling between services and standard UDP ports, so both our authentication server and our services can sit on any
port the administrator wants. This helps in setting up multiple services and domains on the same machine, and avoids firewall problems.

When using elliptic curve cryptography, public keys are much shorter than RSA keys, so a single UDP packet should be able to carry the necessary information.

\textbf{Warning:}

Efficiently packing all this information still results in an answer packet that is much bigger than the request. As we cannot put further restraints on the DNS
system, this means the DNS system can be used for amplification attacks.

Thanks to the structure of the DNSSEC system, however, this is already happening: multiple servers are already available\footnote{\href{http://dnscurve.org/amplification.html}{DNSSEC amplification}} that provide an
amplification of up to a hundred times the original packet. Therefore this should not further increase any threat (as it is already present), and might even
push for a resolution of the DNSSEC amplification problem.




\subsection{Perfect Forward Secrecy}
\label{pfs}\index{Perfect Forward Secrecy}

\textbf{PFS} is a property of key exchange handshakes which guarantees that if a whole connection is intercepted today, and the private keys are leaked
tomorrow, the security of the eavesdropped connection will not suffer.

This is usually implemented through the generation and usage of \textbf{ephemeral}\index{Ephemeral keys} asymmetric keys, which are used for only one connection and then
thrown away. These ephemeral keys are signed with the main long-term key, so that the receiving end can be sure of their authenticity.
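
A sketch of this with libsodium (Ed25519 signatures and X25519 key-exchange keys are an illustrative choice here, not the mandated ciphers):

\begin{verbatim}
#include <sodium.h>

/* Generate an ephemeral key pair and sign its public half with the
 * long-term signing key, so clients can verify its authenticity.
 * Ed25519/X25519 via libsodium is an illustrative choice, not the
 * mandated algorithm. */
int publish_ephemeral(const unsigned char lt_sk[crypto_sign_SECRETKEYBYTES])
{
    unsigned char eph_pk[crypto_kx_PUBLICKEYBYTES];
    unsigned char eph_sk[crypto_kx_SECRETKEYBYTES];
    unsigned char sig[crypto_sign_BYTES];

    if (sodium_init() < 0)
        return -1;
    crypto_kx_keypair(eph_pk, eph_sk);       /* ephemeral key pair  */
    crypto_sign_detached(sig, NULL, eph_pk,  /* signed by long-term */
                         sizeof eph_pk, lt_sk);
    /* publish (eph_pk, sig); discard eph_sk at the next rotation   */
    return 0;
}
\end{verbatim}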

The actual scope of the ephemeral keys is not strictly a single connection. Depending on the trust model and synchronization mechanism,
an ephemeral key can span multiple connections over multiple hours; the important part is that this key is easy to replace and is in fact regenerated
constantly. The actual regeneration timeout depends on the amount of security we want to grant to the whole system, as longer timeouts obviously
widen the attacker's decryption window.

PFS is currently supported in SSH, TLS (optional, per-connection), CurveCP and minimaLT (timeout-based, multiple connections), and QUIC. QUIC actually
uses a lighter form of PFS, where the initial key exchange is not covered by forward secrecy, but the client can then request another key exchange to
force the property. This choice was made to lower the number of round trips necessary for connection setup, so it sacrifices security for efficiency.

In Fenrir, PFS is regarded as fundamental, and as such it is always used, even when it costs some efficiency.


\subsubsection{minimaLT approach}\index{Perfect Forward Secrecy!minimaLT}\index{minimaLT!Perfect Forward Secrecy}

MinimaLT introduces a novel way to handle PFS. Instead of having one long-term public/private key pair, it only uses ephemeral keys, which must be changed at
specified intervals. Every key is published directly in some directory service (DNS). As the key is short-lived, minimaLT can avoid one full round trip
to set up ephemeral key exchanges. This means that the DNS server and the minimaLT server must synchronize, and probably need coherent clocks to publish
keys at the right time.

The longer we keep the same ephemeral key, the lower the security, but higher efficiency is gained through DNS caching. Theoretically this might put more
work on the DNS servers, but in practice changing ephemeral keys every half day, or even just once a day, could be considered enough for
non-security-critical applications. Even for intensive usage, changing keys every couple of hours would still provide decent security and caching.

The practical problem, however, lies with the DNS registrars. Registrars that provide high configurability are rare, to the point where very few record types are
available to the end user (even TXT records can be restricted), and some services allow only a couple of unpaid modifications per day, further limiting the
usability of the system. Interaction between service and user is not standardized, so it can hardly be scripted, and minimaLT requires keys to change at exactly
the same moment on both the server and the DNS system, so even clocks must be synchronized and reliable.

None of these problems arise the moment we set up our own DNS system, but requiring custom modifications to a service (DNS) that is not always
under the direct control of the application owner might be excessive.

The biggest drawback, still, is the need to perfectly synchronize the DNS key publishing and service key rotation.


\subsection{Handshake}\index{Handshake}

In Fenrir 3 different handshakes will be available:
\begin{itemize}
\item \textbf{Full-Security}: includes a syncookie-like exchange and a TLS-like handshake. It requires 3 round trips, but borrows heavily from the most tested
algorithms.
\item \textbf{Stateful Exchange}: 2 round trips (avoids the syncookie), but needs to store a state, and thus should be disabled during DoS attacks.
\item \textbf{Directory Synchronized}: like minimaLT, we publish the ephemeral keys on the directory service. Only 1 round trip.
\end{itemize}

Perfect forward secrecy will be enforced in all handshakes, although the scope of the PFS (per-connection or timeout-based) can change.

MinimaLT and QUIC both support 0-round-trip connection setup. Although it sounds exciting, this is explicitly avoided in Fenrir, as we would not be able to guarantee
that a packet has not been spoofed. While an attacker that can intercept all network packets can keep a working connection even while spoofing the IP
address, we should not put every device in a position to exploit IP address spoofing when it lacks the capability of intercepting all network
packets.
A bigger problem lies in accepting data (i.e.: service commands) in the first packet: it can enable big amplification attacks, for example by requesting a
download for another IP address, as our service would fill the congestion window for an unconfirmed IP address.

Since we are designing a protocol from scratch, we should learn from the TLS system, and try to hide as much information as possible, including the service
type and the FQDN that the client is trying to access. This information is always in plaintext in TLS, as the TCP port tells us which service is being accessed, and the
FQDN of the service is in cleartext in the certificate, to let the server and client support SNI (Server Name Indication), which lets servers use multiple certificates
on the same TLS socket.

Thanks to the format specified in \ref{DNSKEYID}, we can support multiple public keys at the same time, and they are related only to the authentication server,
which means that an eavesdropping attacker will not be able to detect which service or domain the client wants to connect to, as multiple domains
can be handled by the same authentication server. The key id published with each key will also let us handle any delay between publishing the new key
and deleting the old one, as some clients might not have invalidated their caches yet.\\

We will now look more in depth at the possible handshakes:


\subsubsection{Full-Security}\index{Handshake!Full-Security}

This is the most secure handshake, designed for security rather than efficiency.
\begin{itemize}
\item \textbf{RT n.1}: syncookie exchange to avoid DoS, plus supported algorithms.
\item \textbf{RT n.2}: exchange of supported algorithms, ephemeral keys and key exchange.
\item \textbf{RT n.3}: authentication and authorization.
\end{itemize}

This key exchange looks a lot like the classic TLS one, and the client needs a full 3 round trips to complete the authentication. It is also the slowest method,
hence the need for more efficient handshakes.

Perfect Forward Secrecy is enforced at round trip n.2.


\subsubsection{Stateful Exchange}\index{Handshake!Stateful Exchange}

When efficiency is needed, this handshake can be used. Although it takes one round trip less, it needs to store a state right from the first received
message. Therefore this method should be available only when few connections per second are being created, and should be automatically disabled
under heavy load, out of concern for a possible memory DoS.

\begin{itemize}
\item \textbf{RT n.1}: algorithm support, ephemeral key from server.
\item \textbf{RT n.2}: client ephemeral key, authentication.
\end{itemize}

The memory DoS problem arises from the need to store the private key in memory during the handshake. If the private key is generated for
every connection, DoS attacks are easy to mount. A solution is to simply keep using the same ephemeral key for multiple connections,
so that only a few keys need to be kept in memory (due to key rotation). This solution, however, breaks the one-key-per-connection rule.

It should also be noted that the amount of information carried by each packet is now increased, so multiple packets might be needed to
transfer the information. This can be a problem in the case of packet loss, but it does not count as an extra round trip, as both the client and the server can send
the packets without waiting for the ACK.


\subsubsection{Directory-Synchronized}\index{Handshake!Directory-Synchronized}

This is basically the minimaLT approach, revisited.

The only round trip required in this case comprises the client's ephemeral key and the user authentication; the server only answers with a success
or reject response.

The problem with this solution remains the synchronization between the directory service and the authentication server. However, the Fenrir approach
differs from minimaLT in that we only need to synchronize the authentication server, and not all the services. Moreover, each public key
of the authentication server has a random identifier attached, so that multiple keys can coexist, making key rotation much smoother and avoiding
relying too much on the timeouts and clocks of other machines.

\subsubsection{Downgrade protection}\index{Handshake!Downgrade attack protection}

We have outlined multiple handshakes, but we cannot just let them be used independently. One of the (many) problems found in the TLS protocol was its
fragility against downgrade attacks. While the TLS protocol kept being extended to counter new problems and add features, there was no security
mechanism to stop an attacker from performing a downgrade attack.

A downgrade attack is successful when both the client and the server support new protocol versions or key exchanges, but the attacker can force one of the two
to use an older (possibly broken) version. In TLS the mechanism was regulated by a timeout: if the attacker simply dropped the initial handshake,
the client would fall back to the older and broken SSLv3\footnote{\href{https://poodlebleed.com/ssl-poodle.pdf}{This POODLE bites: exploiting the SSL 3.0 fallback,  M{\"o}ller, Bodo and Duong, Thai and Kotowicz, Krzysztof}}.

This is obviously a grave error, probably caused by trying to introduce flexibility after having thought about security. The solution in the Fenrir protocol
will thus be to list all the supported encryption primitives, handshakes and protocol versions during the handshake, and to sign everything, from both
the client and the server, so that no such attack can be performed. Learning from the TLS protocol, the server will be the one deciding which algorithm
should be used, as client applications and devices are very rarely updated regularly by their users, or even by their producers (e.g.: Android versions on
phones, small IoT devices, cheap devices in general).

Later on, in the formal proof, we will purposely include broken handshakes to simulate what would happen when new discoveries prove
an algorithm or a whole handshake to be broken. As long as either the client or the authentication server disables a broken algorithm or key exchange, neither
party must be allowed to use it.

\subsubsection{Spoof/Amplification protection}

The TCP 3-way handshake and the SCTP syncookie provide sufficient proof that the other party has control of the IP it is using.

This proof is not without problems: an attacker that controls the network can spoof any IP it wants and remain undetected, but such an attacker is not the most common type, as this requires control of ISP networks; otherwise only a very limited set of IPs can be spoofed.

Thus every connection made with the Fenrir protocol will require at least one round trip. While this is already part of the handshake designs, there is another vector that must be considered: session resumption, or change of IP in the case of mobile clients.\\
Since mobile clients are not a special case, any device has the possibility to change IP.

Changing IP without confirming the source with another round trip can lead to a bad amplification attack: imagine a client that starts downloading a big file and then pretends to change IP. The new destination will be flooded, in a big amplification attack that even hides the attacker's source.

To avoid this, the server will accept packets from all IPs, but when a new IP is to be used, the
client has to send a dedicated ``new-IP'' packet from that IP, and the server will respond with a
256-bit random syncookie. Only when the client returns the cookie will the new IP be approved.

The syncookie exchange will still happen inside the encrypted connection, with the same keys as before,
to avoid setting up a new authentication vector. The ``new-IP'' packet from the client will be required
to be a full MTU, to avoid even a small amplification attack of a few bytes.
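
One possible stateless construction for such a cookie (our illustration, not taken from the specification) derives it from a server secret, the connection id and the claimed address, so the server stores nothing until the cookie comes back and verifies:

\begin{verbatim}
#include <sodium.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stateless "new-IP" cookie: a 256-bit MAC over the
 * connection id and the claimed address, keyed with a server secret
 * (crypto_auth is libsodium's HMAC-SHA512-256). */
void new_ip_cookie(unsigned char cookie[crypto_auth_BYTES],
                   const unsigned char key[crypto_auth_KEYBYTES],
                   uint32_t conn_id,
                   const unsigned char *addr, size_t addr_len)
{
    unsigned char msg[4 + 16];

    if (addr_len > 16)            /* IPv4: 4 bytes, IPv6: 16 bytes */
        addr_len = 16;
    memcpy(msg, &conn_id, 4);
    memcpy(msg + 4, addr, addr_len);
    crypto_auth(cookie, msg, 4 + addr_len, key);
    /* verification: recompute and crypto_auth_verify() on return */
}
\end{verbatim}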

\subsection{Side channel}\index{Side channel}
Luker's avatar
Luker committed

Many of the recent attacks on TLS and other protocols are side-channel based, relying for example on the size of the transmitted data or on the amount of time necessary to encrypt or decrypt data.

Although these attacks are more of an issue for implementations than for protocols, Fenrir should try to limit the amount of information that can be gathered through them.

To make data-size based attacks more difficult, packets contain a random amount of padding; the actual maximum amount should be user-defined, as too much padding can interfere with the application flow.

In the future Fenrir should also be able to offer communication channels restricted in both maximum and minimum used bandwidth, so that traffic analysis becomes more difficult.

Fenrir also uses some header fields (stream id, counter) that can (and should) be randomized in order to increase the entropy of the encrypted data and give attackers fewer details on the content of the cleartext, so that known-plaintext attacks become more difficult.
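
As an illustration of the padding idea (a hypothetical helper, not the actual packet format, which carries the real payload length in its encrypted header):

\begin{verbatim}
# Sketch: random padding to blur data-size side channels.
import secrets

def pad(payload: bytes, max_pad: int) -> bytes:
    # max_pad is user-defined: more padding hides sizes better but
    # wastes bandwidth and can interfere with the application flow.
    return payload + secrets.token_bytes(secrets.randbelow(max_pad + 1))
\end{verbatim}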


\section{Authentication}

Fenrir is a token-based authentication protocol, so the main identifier for a user will be its
token.

To avoid enumeration attacks, a token should be a 128-bit string of random content.
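
Generating such a token only requires a cryptographically secure random source; in Python, for instance:

\begin{verbatim}
import secrets

token = secrets.token_bytes(16)   # 128 bits of random content
\end{verbatim}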

The first time, however, the client still needs to pass through a ``normal'' password authentication.
This poses multiple problems:
\begin{itemize}
\item We need to store the password safely in the database.
\item We need to somehow limit bruteforce attacks.
\item We need to pay particular attention to the information leaked in this phase.
\end{itemize}

Although some of these problems are not strictly part of the protocol definition, they are still common attack vectors, thus we should address them.

The first problem can be easily solved by using the \textbf{Argon2} algorithm (variant Argon2i), winner of the recent \textit{Password Hashing Competition}\cite{PHC}, to store the passwords.
Argon2 forces a configurable amount of time and memory usage to compute a salted hash, rendering bruteforce less effective even in the case of a leak of the user database.

As the user/password combination should be used sparingly (once per device), the software can severely limit the login attempts, further hampering online attacks.
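
A minimal sketch using the \texttt{argon2-cffi} Python binding (the cost parameters are purely illustrative, not a recommendation):

\begin{verbatim}
# Sketch: storing and verifying a password with Argon2i.
from argon2 import PasswordHasher
from argon2.low_level import Type

ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024,  # 64 MiB
                    parallelism=2, type=Type.I)          # Argon2i

stored = ph.hash("correct horse battery staple")  # salted, encoded
ph.verify(stored, "correct horse battery staple") # raises on mismatch
\end{verbatim}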

A recent attack on the TLS protocol, called \textit{BICYCLE}\footnote{\href{https://guidovranken.files.wordpress.com/2015/12/https-bicycle-attack.pdf}{BICYCLE attack: https://guidovranken.files.wordpress.com/2015/12/https-bicycle-attack.pdf}}, has been able to deduce the length of parts of the encrypted text when a stream cipher is used.
If the attack is focused on the password length, the attacker can greatly reduce the search space for the password, increasing the efficiency of bruteforce attacks.\\
While this attack must be targeted at specific users, and can only be used in the specific context of a new device registration, we can completely avoid these problems by hashing the provided user password with SHA3 and transmitting only the hash, which is constant-length.\\
The same should be done with usernames, to provide better resistance to information leakage.
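
A sketch of the client-side hashing, using Python's standard \texttt{hashlib} (the helper name is illustrative):

\begin{verbatim}
# Sketch: hash credentials so their transmitted length is constant.
import hashlib

def hide_length(value: str) -> bytes:
    return hashlib.sha3_256(value.encode("utf-8")).digest()  # 32 bytes

username_h = hide_length("client@example.com")
password_h = hide_length("a password of any length")
# Both are exactly 32 bytes on the wire, regardless of original length.
\end{verbatim}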


\section{Attacks}
\label{Attacks}

Now that we have a detailed view of the interaction of the protocol parts, we should discuss the overall safety of the proposed protocol.

Part \ref{Formal Verification} of this document is dedicated to the formal verification of the handshakes' interaction and correctness,
so here we will concentrate on the overall trust model, assuming the correctness of said algorithms.

The model on which Fenrir is based is, of course, that the client, the authentication server and the services of its domain are trusted parties.
This does not mean that other authentication servers are fully trusted, so login information must not be leaked to other domains.

First of all, a malicious authentication server can obviously impersonate all its users on its own services. That is because both the service and the authentication
server are controlled (by definition) by the same organization: logs can be forged, user data can be edited. But this is true for any system, so there
is no point in trying to counter it.

However, we can imagine a scenario where a previously trusted authentication server becomes a \textbf{rogue AS} after an attacker manages to compromise
the operating system of said AS. In this case our model does not prevent the AS from impersonating its users, both on its own services and on other domains'
services. While we can not do much to stop the AS from impersonating a client on new services, we can stop it from impersonating the client on
services where the client has already registered, thus protecting the user's data and privacy. The mechanism for this is better explained in the ``anonymous
authentication'' section (\ref{anonymous authentication}), but it basically consists of the client and the other domains' AS exchanging further authentication data.\\

As the name of the model says, we are relying on a \textit{trusted third party} for authentication and authorization. We have a small window of safety even
in case of a breach of our third party, but since an AS is central to the whole Fenrir model, much like a Kerberos or Active Directory server, extra care
should be taken in securing this component.\\

A second and more troublesome source of attacks, one that is rarely considered, is the compromise of our trust base, the \textbf{directory service}.
In the web of trust model this means the compromise of a trusted key; in the public key infrastructure used by X.509 it is the compromise of one of the many
certificate authorities. This last scenario has already happened (the Comodo certificate hacking), and it should be noted that there are a lot of countries
that can secretly force certificate authorities to forge specific valid certificates; until those certificates are somehow leaked to the public, no one
would be the wiser.

DNSSEC was chosen for Fenrir not only due to its widespread usage (and testing), but also due to the inherent publicity of the system. Forging an answer and
changing the publicized keys will affect lots of people and should be easy to detect, as you only need a script constantly checking the public
information against the one given by the user.\\
Although results can be forged for specific users, such changes will be much easier to spot. Moreover, personal account changes will be easier to spot as
all logins must pass through the AS, which tracks both logins and device ids and can thus apply further heuristics (like country of origin) to all services,
independently of domain or type of service.




\section{Credentials and Authorization}
\label{FederationAlgorithm}
\label{Credentials}\index{Authentication}

\lettrine{A}{uthentication} and authorization are the first things we need to concern ourselves with after setting up a secure channel.
We will also introduce a new authorization technique to help the user limit account access for applications and whole devices. Finally we will break the
total trust that the user had to place in the authentication server, decreasing the tactical value of a hacked authentication server, which will no longer
be able to impersonate the client on existing services.

\section{Authentication}\index{Authentication}

Even (and especially) here Fenrir must be flexible, and be able to support different authentication mechanisms (passwords, OTPs, certificates or others).

One of the limitations of current protocols is that we are forced to authenticate at the beginning of the connection, during the handshake.
Although this mode must obviously be supported, limiting the login to the handshake forces the developer to open a new connection and drop the
previous one. While this is not terribly complicated, it is still something that is not usually done, and it is the reason why TLS authentication is not used and
multiple protocols stacked on top of it have to reimplement authentication. To avoid duplication of features and more stacking of protocols, Fenrir
should provide the ability to drop unauthenticated streams and switch to an authenticated connection.

This is especially useful for web sites, where the user visits the homepage unauthenticated, and then logs in without having to set up a second connection.
\subsection{Application Authentication}

A novelty introduced by OAuth is the authentication of applications: the application binary has to provide a sort of username and password
to the service before being recognized, and before providing the user's login information. The application credentials are therefore stored statically inside
the application binary, which means that almost all applications will store them in plaintext, while others will try to obfuscate them, probably unsuccessfully.

Although such an application authentication is inherently flawed and has few use cases (so few that OAuth2 introduced mechanisms where it is
not needed), the concept is not useless. We can shift the application authentication to become a \textit{device} authentication.
This means that the user can now control which and how many devices are connected to his account. The main purpose of this device authentication
is not to limit the number of user devices, but to identify different devices and give a scope to each of them. This will let the user know if his credentials
have been stolen, as new devices will appear connected to the account, and it will let the user limit the device authorization so that a device that can be
lost easily, like a cellphone, has only read access to important accounts (like banking accounts).

Moreover, by controlling device authentications, we can easily give scope to each authorization and drop all authorizations given to a single device
without affecting other devices' authorizations.

\subsection{Token}\index{Token}

One of the easiest ways to break into any account is to try to guess commonly used passwords. Thanks to the design of Fenrir, instead of continuously
using weak passwords we can use hard-to-guess \textbf{tokens}. After the initial login, tokens will be used for everything, from application
authentication to user authorization. This design will discourage password reuse, and users might be willing to use
stronger passwords, as they need them \textit{only once}.

It should be noted that the connection is either anonymous or tied to an authenticated user. This means that the token will only be used during authentication.
Although this seems obvious, it is a big difference from other protocols like Kerberos or OAuth, where the developer is provided with a token that represents
its authentication and has to transmit that token with every request.\\
This last approach is more flexible, as it can span multiple protocols and scopes, but it decreases security as the developer has to pay attention not to leak
the secret token. Developers would have to pay much more attention to security and information leaks, while the vast majority can barely be trusted with
writing a stable application; by tying the token to the connection we simplify application security and take it out of the developers' hands.


\subsection{Single sign-on}\index{Single sign-on}

Single sign-on mechanisms provide the user with the ability to log in once and be automatically logged into the different services of a single vendor.

While it looks like a very useful feature, it is not very flexible, as you also need to log out of every service before logging in with a different user, and it
requires a global state. As we want to provide the user with the ability to handle multiple accounts, guessing the right account to load in an application
which might have been started in the background (think about smartphones) is neither easy nor recommended.

But ease of use is still a requirement, so Fenrir will switch to a one-click login model instead. The ideal workflow is:

\begin{itemize}
\item the client (e.g.\ the application) tells the client manager that it wants to log in somewhere.
\item the client manager asks the user whether it should log in at the given location, and with which account.
\item after authentication, the client manager gives only the connection id and keys to the client.
\end{itemize}

This means that the user will always be presented with a familiar and consistent login prompt, whatever service he might log into, and that the
application will never see passwords, usernames or authentication tokens, thus excluding the application from security and from username and password
management.

More advanced client managers might learn the user's choices, or provide suggestions to the applications, so flexibility is maintained without nagging the user.


\section{Authorization Lattice}\label{AuthorizationLattice}\index{Authorization!Lattice}\index{Lattice}


Aside from authentication, one feature a lot of protocols try to include is a way to limit the scope of the user authentication, that is: \textbf{authorization}.

The first thing that needs to be done is to somehow list the possible privileges; then, during authentication, we can tie a token to a specific privilege,
and thus limit the application's capabilities.

This by itself is already included in OAuth, but there is no standard way to manage privileges and in some cases the application has to restrain itself,
which is not really safe if we do not trust the application.

Fenrir will therefore include a standard way to list authorizations, and multiple levels of enforcement: by taking advantage of the 3rd-party model,
we can make the authentication server associate a maximum privilege (and no more) to a token, and then that token can be further limited in privilege
by the client manager. As the final application does not interact with tokens or authentication in any way, it is forced to use the limited level of
authentication that it has been provided with, thus ensuring our account safety with untrusted applications.

This model however requires the different privileges to be ranked relative to one another. In mathematical terms, we are looking at a \textbf{complete lattice},
as it is the only structure that satisfies our requirements.

\begin{figure}[h]
\centering
\begin{tikzpicture}[-,>=stealth',auto,node distance=1cm]
  \tikzstyle{every state}=[fill=none,draw=none,text=black]

  \node[state]	(B) {Bottom};
  \node[state]	(R) [above left of=B,node distance=2cm] {Read};
  \node[state]	(M) [above left of=R,node distance=2cm] {Modify};
  \node[state]	(A) [above right of=R, node distance=2cm] {Add};
  \node[state] 	(W) [above of=R, node distance=3cm] {Write};
  \node[state]	(T) [above right of=W, node distance=2cm] {Top};
  \node[state]	(I) [right of=A, node distance=2cm] {Account info};
  
	\path (B) edge node {} (R)
		(R) edge node {} (A)
		(R) edge node {} (M)
		(A) edge node {} (W)
		(M) edge node {} (W)
		(W) edge node {} (T)
		(B) edge node {} (I)
		(I) edge node {} (T);
\end{tikzpicture}
\caption{Example of a complete lattice of privileges}
\label{example of a lattice}
\end{figure}

The mathematical definition of a complete lattice is a partially ordered set in which all subsets have both a supremum (top) and an infimum (bottom).\\
Since each pair of privileges has an upper element that holds both privileges and a lower element with the privileges they have in common, it is easy to quickly
verify whether a token that was tied to a privilege can be used with the privilege that the client manager is asking for, or can be further limited if the client
manager decides so.
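
A minimal sketch of such a check, representing the example lattice of figure \ref{example of a lattice} as directed edges from lower to higher privilege (the encoding is hypothetical):

\begin{verbatim}
# Sketch: privilege comparison on the example lattice.
edges = {
    "Bottom": {"Read", "Account info"},
    "Read":   {"Modify", "Add"},
    "Modify": {"Write"},
    "Add":    {"Write"},
    "Write":  {"Top"},
    "Account info": {"Top"},
}

def leq(lower, higher):
    # True if `higher` dominates `lower` in the partial order.
    if lower == higher:
        return True
    return any(leq(mid, higher) for mid in edges.get(lower, ()))

# A token bound to "Write" can serve a request needing "Read"...
assert leq("Read", "Write")
# ...but not one needing "Account info": they are incomparable.
assert not leq("Account info", "Write")
\end{verbatim}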

We now have a hierarchical definition of privileges which can be enforced independently of the application, is tied directly into the protocol and
is under the complete control of the user. The only downside is that, as each lattice of privileges is specific to an application, we need to transfer the
lattice between the service, the authentication server and the client manager in order to let the user work with the lattices of every service.

However this is needed only on the first connection (the lattice can be cached), and is not needed for anonymous connections as there is no user to
authorize.

We should still be wary of rogue services building pointlessly big lattices, so there should be a limit on the size of the lattice, to avoid DoSing
authentication servers and client managers.


\section{Advanced Authentication}\index{Authentication!Advanced}

The authentication described so far is still rather basic; we can be much more flexible and secure.
By introducing anonymous and direct authentication we will increase the number of components that must fail before an account can be tied to a person
or successfully compromised, while keeping the account verifiable.

\subsection{Anonymous Authentication}
\label{anonymous authentication}\index{Authentication!Anonymous}


This contradiction in terms can actually be thought of as an authentication with undisclosed information (login included).
This is useful when we do not want to give out our username (which is also our email) to a third-party service, but we still want to be able
to access its services.

The obvious implementation is the automatic request and creation of temporary accounts, or \textbf{account aliases}\index{Account Aliases}.

If the only way to use account aliases were to permanently associate an alias with an account, any leak connecting the original account
to the temporary one would breach the client's anonymity.

Keeping in mind how the federation works (see \ref{FederationExample}), in an example where \textit{client@example.com} connects to a service in the
\textit{domain.com} domain, the authentication server for \textit{domain.com} only needs to remember that one of its accounts can be accessed by
the \textit{example.com} domain. The authentication server for \textit{example.com} will then confirm or deny that the account at \textit{domain.com}
can be accessed by the client's account.

This system decouples the client's login from its authentication to other domains, so the client can use temporary usernames that can last for
as little as the authentication process itself.
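
A toy model of this confirmation flow (all names and data structures are hypothetical):

\begin{verbatim}
# Sketch: alias confirmation between two authentication servers.
# domain.com's AS only knows "this account is reachable via
# example.com"; example.com's AS confirms or denies the mapping.

aliases  = {"tmp-7f3a": "example.com"}           # at domain.com's AS
accounts = {("client@example.com", "tmp-7f3a")}  # at example.com's AS

def confirm(alias, claimed_domain, real_user):
    # domain.com's AS checks which domain vouches for the alias...
    if aliases.get(alias) != claimed_domain:
        return False
    # ...then asks example.com's AS, which never reveals real_user
    # to domain.com: it only answers yes or no.
    return (real_user, alias) in accounts

assert confirm("tmp-7f3a", "example.com", "client@example.com")
\end{verbatim}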


\subsection{Direct Authentication}\index{Authentication!Direct}

One problem of the whole Fenrir system is that we are putting a lot of trust in the authentication server. While this lets us create a lot of interesting new
scenarios, it also creates a very big single point of failure. As time passes, authentication server implementations will become more and more resilient
to attacks, but the risk of a compromised authentication server remains too great.

To solve this problem we can introduce an automatic, lightweight second layer of authentication that does not require user intervention. We can go one step
further and make it direct between client and service, a form of \textbf{direct authentication}.

The idea is that the client will share a secret key directly with each service it logs into (non-anonymously). If we manage this,
then the compromise of a whole authentication server will let the AS impersonate the client only on services where the client has not yet logged in (as it can just
pretend to be a new client), but the existing user accounts and data will be safe, as the hacked authentication server will not be able to guess the shared
key between client and service.

As a further precaution, since the shared key handling is automatic, the shared key should be a hash-based OTP\cite{Lamport:OTP}. This extra step ensures additional safety and breaks the complete reliance on the authentication server.
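
A sketch of Lamport's hash-chain scheme, with illustrative names and sizes:

\begin{verbatim}
# Sketch: Lamport-style hash-based OTPs shared by client and service.
import hashlib, secrets

def H(x):
    return hashlib.sha3_256(x).digest()

# Client: build the chain seed -> H(seed) -> ... -> H^n(seed).
seed, n = secrets.token_bytes(32), 1000
chain = [seed]
for _ in range(n):
    chain.append(H(chain[-1]))

service_state = chain[-1]   # the service stores only the chain tip

def login(otp):
    # Each login reveals the predecessor of the stored element, so
    # every shared key is used exactly once and never repeated.
    global service_state
    if H(otp) == service_state:
        service_state = otp
        return True
    return False

assert login(chain[-2]) and login(chain[-3])
\end{verbatim}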

\subsubsection{Trusting trust with direct authentication}

Once the client and service have a shared key, it should become impossible even for the authentication server to break the encryption between the two,
as even man-in-the-middle attacks by a rogue authentication server will be stopped.

However, when the key is first shared, the authentication server can create a phony service, let the client exchange keys with it, and effectively mount a
man-in-the-middle attack.

The only solution to this problem is to publicize the public key of each service, in a way similar to what we have done with DNSSEC. The service will then
sign the generated data that will be relayed to the client via the authentication server, thus assuring the client of the impossibility of man-in-the-middle attacks.
This will however break the previously stated feature that the identity of the accessed service is kept secret.
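
A sketch of the client-side check (Ed25519 via the Python \texttt{cryptography} library; the DNSSEC lookup is abstracted away as an already-validated key):

\begin{verbatim}
# Sketch: verify that key material relayed by the AS was really
# signed by the service's publicly advertised key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PublicKey,
)

def verify_service_data(relayed, signature, published_key_bytes):
    # published_key_bytes comes from a DNSSEC-validated record.
    key = Ed25519PublicKey.from_public_bytes(published_key_bytes)
    try:
        key.verify(signature, relayed)
        return True            # no man-in-the-middle by the AS
    except InvalidSignature:
        return False
\end{verbatim}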



\section{Authentication summary}

Thanks to the direct authentication, we have broken away from relying only on the authentication server. Our model is now a hybrid between a
third-party security model and a purely client-server one.

To successfully compromise an account, an attacker now needs to compromise both the authentication server and the service it wants to access, which
means that we have fully removed the single point of failure from our protocol.

A single point of failure still exists in the form of the trust system used by the protocol to publicize the public keys of both the authentication server
and the service. If this system (like DNSSEC) does not grant privacy, the Fenrir protocol will not be able to guarantee that the accessed service remains anonymous.
If the trust system is compromised, the attacker can just point the user to its own authentication servers and services, thus impersonating the user on \textit{new}
services. It should be noted that even when the attacker compromises the trust system, thanks to our direct authentication it will not be able to
impersonate the client on services where the client registered earlier, thus protecting user data.

In the end, we have created a system where a lot of trust is given to the authentication server, but \textbf{even in the case where the trust system and the
authentication servers are compromised, the user data inside the services remains safe}, provided that the user registered to the services
before the authentication server was compromised.\\

\subsection{Putting it all together: an authentication flow example}\index{Authentication!Example}

Until now we have discussed the protocol informally, presenting existing solutions and their defects, and slowly modeling our protocol around the
good and bad choices made by current protocols, while introducing a couple of innovations, like the authorization lattice (\ref{AuthorizationLattice}).

A formal, verifiable model will be presented in part \ref{Formal Verification}; for now we want to look at the big picture, at a working example of all the
features we have introduced so far.

In our example, we will have a client, \textbf{client@example.com}, connecting to a service of the domain \textbf{domain.com}.

First of all, DNSSEC will be used as the source of our trust. Although this system has many problems (no privacy, caching delays, political problems), it is
the only source of trust currently available that is largely compatible with today's infrastructure.

The very first thing the client manager will do is connect to its authentication server and log in. This connection is persistent and is needed to synchronize
data and tokens between the client manager and the authentication server. As this step is the same as in the example below, we will assume this connection
is always present, without going into details.\\

When an application \textit{X} wants to connect to a service in the \textbf{domain.com} domain, it will tell its client manager that it wants to connect to