% !TeX root = Fenrir.tex

\xkcdchapter{The Fenrir Project}{standards}{The good thing about standards is that\\there are many to choose from...}
\label{Fenrir Project}

\lettrine{A}{t} the time of writing, the most stable and standard protocol stack is TCP-TLS-HTTP-OAuth2, but many useful features, such as federation support and
interoperability, are lost with this solution.

Unless we want to authenticate over an insecure connection and handle that connection's security by hand, which is not something many developers
are able to do correctly, we are limited to a single non-multiplexed secure connection, and it must run over HTTP.

There are interesting solutions like QUIC, but their experimental, non-standardized status has kept authentication protocol developers away,
so that only highly security-conscious developers might be able to implement such a solution, at the expense of portability.

A lot of efficiency is lost to multiple handshakes (TCP, TLS and OAuth each have their own), since no protocol can obtain any properties
(especially security properties) from the lower-level protocols. This leads to longer connection times and an increased attack surface for the
various services.

\section{The solutions}

\subsection{High level protocol}

If we try to fix the situation with another high-level protocol (as OpenID Connect is trying to do) we gain ease of implementation thanks to the
abstractions of the lower-level protocols and their security properties, but we are also limited by them. Efficiency is also greatly impacted, and we
might have to rely on yet more protocols to work around the limitations of the stack we chose (as OpenID Connect has to rely on WebFinger to work around OAuth's
lack of interoperability).

This means that our quest for simplicity leads to a contradiction: as more protocols are used, the attack surface grows, and we have to
handle all of their interactions and limitations.

As stated, this is the road chosen by the OAuth and OpenID Connect authors, so there is little to gain from choosing it again.

\subsection{Low level protocol}

At first glance this is much more complex, as we need to reimplement everything from TCP up to OAuth in a single solution, but we can take many
features from experimental protocols and add federation and authorization support, which is found virtually nowhere else. We thus gain in:

\begin{itemize}
	\item \textbf{Efficiency}: handshake data can be protected from TCP resets and can include authentication data; there is no need for multiple
	handshakes or multiple chain-of-trust checks.
	\item \textbf{Federation}: we can finally design the protocol so that authentication works the same way across multiple domains, by including domain discovery techniques.
	\item \textbf{Authorization}: we can design the system so that the user can force an application down to a lower authorization level if the application is not trusted.
	\item \textbf{Additional features}:
	\begin{itemize}
		\item \textbf{transport flexibility}: multistream support, and the ability to choose the transport features of every stream, will give applications more capabilities while simplifying them (see the sketch after this list)
		\item \textbf{multihoming}: finally a protocol whose connection state does not depend on layer 3 (IP) data.
		\item \textbf{multicast}: including this will greatly simplify application development and content delivery
		\item \textbf{datagram}: preserving message boundaries regardless of how the data is split into packets will simplify user data management
		\item \textbf{uniformity}: transport and authentication are fused together, and the application is decoupled from user authentication, simplifying and securing
		existing solutions
	\end{itemize}
\end{itemize}
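
To make the ``transport flexibility'' and ``datagram'' points more concrete, here is a minimal sketch of how an application might describe its streams. The names used (\texttt{StreamConfig}, the stream labels) are illustrative assumptions, not part of any existing Fenrir API.

\begin{verbatim}
# Purely illustrative sketch (Python): per-stream transport settings.
# StreamConfig and the stream names are assumptions, not an existing API.
from dataclasses import dataclass

@dataclass
class StreamConfig:
    reliable: bool = True    # retransmit lost data
    ordered: bool = True     # deliver in order
    datagram: bool = False   # preserve message boundaries

# One connection, several streams with different delivery guarantees:
streams = {
    "control":   StreamConfig(reliable=True,  ordered=True),
    "telemetry": StreamConfig(reliable=False, ordered=False, datagram=True),
    "bulk":      StreamConfig(reliable=True,  ordered=False),
}

for name, cfg in streams.items():
    print(name, cfg)
\end{verbatim}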

This is obviously more work, but the overall amount of code in the whole protocol stack will be much smaller, thus reducing the attack surface.

~\\

As we are talking about a new, experimental protocol anyway, the obvious choice is this one. To avoid repeating the SCTP/DCCP mistakes, the protocol will
need to work seamlessly both on top of UDP (to bypass firewall and NAT problems) and directly on top of IP (for efficiency), so we should also take
into account a transitional phase between UDP-based and IP-based transport.
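
A minimal sketch of this dual transport, assuming a POSIX-like system and Python's standard \texttt{socket} module: try the efficient raw-IP path first and fall back to UDP when it is not available. The protocol number 253 is one of the values reserved for experimentation and is only an assumption, not Fenrir's number.

\begin{verbatim}
# Sketch only: prefer a raw IP socket (efficiency), fall back to UDP
# (NAT and firewall traversal). Protocol number 253 is reserved for
# experimentation (RFC 3692) and is an assumption, not Fenrir's number.
import socket

def open_transport():
    try:
        # Raw sockets need elevated privileges: the "directly on IP" path.
        return socket.socket(socket.AF_INET, socket.SOCK_RAW, 253), "ip"
    except PermissionError:
        # Fallback: UDP-based transport, friendlier to NATs and firewalls.
        return socket.socket(socket.AF_INET, socket.SOCK_DGRAM), "udp"

sock, kind = open_transport()
print("using", kind, "transport")
\end{verbatim}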

Again, the attack surface will be reduced, especially once the code base stabilizes, and there will be no need to analyze the interactions between multiple protocols,
thus simplifying development.





\section{Summary}

Having looked at the current situation, we stick with the cleanest solution, which is the creation of a new low-level protocol that spans
multiple layers of the conventional ISO/OSI stack. This choice will obviously increase the initial amount of work needed to develop our system, but by not
having to rely on old technology we will, for the first time in the development of a protocol, be able to fully control the security properties of the system.

It may seem that we are creating an overly complex protocol, but when we compare the result with the number of protocols we aim to replace
(TCP/UDP, (D)TLS, OAuth and more), the complexity of our solution will clearly be lower than the combined complexity of the possible interactions of existing
solutions, not to mention that the security properties of the interactions between the different possible user configurations have barely been analyzed, and that
those solutions are constrained by legacy choices.


\section{Fenrir}

What we are looking for is something that can not only fully handle the various data delivery options, but that, above all, finally has good support for
both federation and authorization, alongside authentication.


\begin{figure}[h]
    \centering
    \includegraphics[width=0.5\textwidth]{images/Fenrir_logo.png}
    \caption{Fenrir Logo}
    \label{fig:Fenrir_Logo}
\end{figure}



\section{Federated Authentication}

The main feature is, obviously, federated authentication. This means that we will need some form of interaction between the servers of multiple
independent domains, with each domain trusting only its own users. Therefore, each user will be identified by its username and domain, in an email-like format.

For this, we reuse the distinction introduced by Kerberos and divide the players into three (plus one); a short illustrative sketch follows the list:

\index{Client Manager}\index{Authentication Server}\index{Federation}
\begin{itemize}
\item \textbf{Authentication Server}: in short, \textbf{AS}. It handles authentication and authorization for its domain.
\item \textbf{Service}: the service the client wants to use, be it a mail service or a web service.
\item \textbf{Client}: the user program that connects to a \textit{Service}.
\item \textbf{Client Manager}: the program that manages authentication data for the user.
\end{itemize}
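
As a purely illustrative sketch (none of these names exist in any Fenrir codebase), the four players and the email-like identifier can be modelled as follows:

\begin{verbatim}
# Illustrative only: the four players and the email-like user identifier.
from enum import Enum

class Role(Enum):
    AUTHENTICATION_SERVER = "AS"       # authn/authz for its own domain
    SERVICE = "service"                # what the client wants to use
    CLIENT = "client"                  # user program talking to a Service
    CLIENT_MANAGER = "client manager"  # holds the user's credentials

def parse_identity(identity):
    """Split an email-like identifier into (user, domain)."""
    user, domain = identity.rsplit("@", 1)
    return user, domain

print(parse_identity("client@example.com"))  # ('client', 'example.com')
\end{verbatim}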

\subsection{Decoupling authentication from application}\index{Authentication!Decoupling}

The first two distinctions are fairly straightforward: the authentication server handles authentication, so that the service can be designed without having
access, for example, to the password database or to the user login. This is an important distinction, as applications are much more vulnerable to bugs and
have a much higher attack surface. By decoupling the two, the user and password databases should be better protected, as the only application that has
access to them is the one specifically designed to protect them.

The distinction between \textit{Client} and \textit{Client Manager} has the same purpose. Current applications usually save login information in cleartext,
or in a poorly obfuscated manner (base64 and the like). For the same reasons as before, we want to decouple user authentication from the
application itself. This will permit the system-wide use of strong authentication methods such as security tokens or smart cards, provide better support
for authentication algorithms, instead of having clients rely on old methods like the deprecated SSLv3, and over time will provide better security for
the legacy applications of tomorrow, as the security of the authentication will be upgradable regardless of the application itself.

Decoupling authentication from the application has one more interesting outcome: as the \textit{Client Manager} handles both authorization and
authentication, it can limit an application's scope, so that the user can restrict applications they do not trust, or applications that only need to
check for the existence of an account.
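
A minimal sketch of this idea, assuming a simple ordered set of authorization levels (the level names are invented for the example): the Client Manager grants an application the minimum between what it asks for and what the user trusts it with.

\begin{verbatim}
# Hypothetical sketch: the Client Manager clamps the requested authorization
# to the trust level the user assigned to the application.
AUTH_LEVELS = ["none", "account-exists", "read", "full"]

def granted_level(requested, user_trust):
    req = AUTH_LEVELS.index(requested)
    trust = AUTH_LEVELS.index(user_trust)
    return AUTH_LEVELS[min(req, trust)]

# An untrusted app asking for full access only learns the account exists:
print(granted_level("full", "account-exists"))  # -> 'account-exists'
\end{verbatim}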

This means that both the \textit{Authentication Server} and the \textit{Client Manager} will be the primary targets of attacks against Fenrir, but it also means
that the attack surface will be much smaller, and that hardening efforts can be concentrated on a single piece of software.\\
As popular software migrates towards the web, this situation is increasingly common anyway: web browsers need to track each and every user
password, and users do not care about security, which means that the password database will often be in cleartext; moreover, the attack surface of a
browser is huge, especially thanks to its plugin system.

Users have always had a bad relationship with security. Forcing them to use and remember hundreds of different passwords will obviously result in
password reuse, weak passwords and a general aversion to security. Collapsing hundreds of passwords into a single login (thanks to federation)
and hardening the client manager will create a single point of failure (one that is already present in the form of browsers), but will raise the difficulty
of a successful attack, therefore increasing overall security.\\

\subsection{The algorithm}

The in-depth algorithm is discussed in chapter \ref{FederationAlgorithm}; only an outline is presented here.

Federated authentication algorithms are nothing new. As per our earlier distinction, we will focus on a Kerberos-like infrastructure.\\
Due to the decoupling previously introduced, our protocol needs some way to tell the various interacting players that a user has been authenticated and
which authorizations it holds. This is done through the use of a token, so that logins and passwords never reach the \textit{Client} or the \textit{Service}.\\


One characteristic of many authentication algorithms is the use of timestamps to protect the authorization tokens or messages in general.
While this provides safety by putting an expiration time on the use of said tokens, it also means that applications, servers and authentication servers must
have at least loosely synchronized clocks.

Although nowadays clock synchronization seems easy and widespread, it is not yet at a point where we can safely assume that clock discrepancies are small.
Embedded devices are still produced without a clock source, so each time they are booted the clock is reset to 1970. The most famous clock synchronization
protocol (NTP) is almost always used in cleartext, and basing our clock on an attacker's input is not pretty.\\
Requiring all clocks in the world to be synchronized to within a couple of minutes of each other, even for devices that do not have stable clock
sources, is in our opinion wrong. Therefore Fenrir will \textit{not} use timestamps. This means that an additional round trip will occasionally be needed
to check the validity of the data, but it also means that tokens can be simplified, as they no longer need signatures, and token revocation is
effective immediately.
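
A minimal sketch of the idea, under the assumption that a token is nothing more than a random opaque value stored by the authentication server: validity is checked with an extra round trip to the AS instead of a signed expiration time, so revoking a token is just deleting it.

\begin{verbatim}
# Sketch under assumptions: tokens carry no timestamp and no signature.
import secrets

tokens = {}  # AS-side state: token -> (user, domain); revocation = deletion

def issue_token(user, domain):
    tok = secrets.token_bytes(16)   # random, unguessable, no clock needed
    tokens[tok] = (user, domain)
    return tok

def verify_token(tok):
    return tokens.get(tok)          # the occasional extra round trip to the AS

def revoke_token(tok):
    tokens.pop(tok, None)           # effective immediately

t = issue_token("client", "example.com")
print(verify_token(t))   # ('client', 'example.com')
revoke_token(t)
print(verify_token(t))   # None
\end{verbatim}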

As the figure below shows, the protocol relies heavily on the authentication servers, which act as a trusted third party. The image is only
an outline, and assumes a shared token between client@example.com and the authentication server of example.com.


\label{FederationExample}
\begin{figure}[h]
\centering
\begin{framed}

\centering
\begin{tikzpicture}[node distance=4cm,>=stealth]
\node[label=above:{Auth.Srv example.com}] (AS1) {\includegraphics[width=2cm,keepaspectratio]{images/auth_server.png}};
\node[label=above:{Auth.Srv domain.com}, right= 3.5 cm of AS1] (AS2) {\includegraphics[width=2cm,keepaspectratio]{images/auth_server.png}};
\node[below of=AS2,left=2.5cm of AS2, label=below:{Client example.com}] (C) {\includegraphics[width=2cm,keepaspectratio]{images/computer.png}};
\node[below of=AS2,right=1.5cm of AS2, label=below:{Service domain.com}] (S) {\includegraphics[width=2cm,keepaspectratio]{images/server.png}};

\draw[<-,thick] (AS2.180) -- node[below]{$1: auth, use ``service''$} (C.90);
\draw[<->,thick] (AS1.30) -- node[below]{$2: check account$} (AS2.150);
\draw[->,thick] (AS2.340) -- node[right]{$3: new user: id, keys$} (S.60);
\draw[<-,thick] (AS2.300) -- node[below]{$4: user registered$} (S.120);
\draw[->,thick] (AS2.250) -- node[below]{$5: ok: ip, keys$} (C.20);
\draw[<->,thick] (C.340) -- node[below]{$6: communication$} (S.200);
\end{tikzpicture}
\end{framed}
\caption{Fenrir overview: client@example.com connects to ``service'' in domain.com}
\end{figure}

The diagram describes the interaction between different domains; the flow is obviously simpler in the case of single-domain logins.

The service will not receive the login data; in fact it only gets a connection id, cryptographic keys and an internal user id. As confirmation, the client
receives the IP address, connection id and keys needed to connect to the service, so that no further handshakes are needed. Moreover, since the client is notified
only once the connection has been confirmed between the authentication server and the service, we avoid non-intuitive errors like ``authenticated but not
connected'', typical of protocols like Kerberos or OAuth, where the authentication server is too detached from the service.
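
The following sketch summarizes what each party ends up holding after the exchange in the figure; the field names and the IP address are illustrative assumptions, while the data items themselves (connection id, keys, internal user id) come from the description above.

\begin{verbatim}
# Illustrative sketch: the data handed out by the AS of the service's domain
# in steps 3 and 5 of the figure. Field names and the address are made up.
import secrets

def authorize(user, service_domain):
    connection_id = secrets.token_hex(8)
    key = secrets.token_bytes(32)        # handed to both client and service
    internal_uid = secrets.token_hex(4)  # the service never sees the login

    to_service = {"connection_id": connection_id, "key": key,
                  "user_id": internal_uid}                       # step 3
    to_client = {"ip": "192.0.2.10", "connection_id": connection_id,
                 "key": key}                                     # step 5
    return to_service, to_client

srv_view, cli_view = authorize("client@example.com", "domain.com")
print(sorted(srv_view))  # ['connection_id', 'key', 'user_id'] -- no login data
print(sorted(cli_view))  # ['connection_id', 'ip', 'key']      -- ready to connect
\end{verbatim}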

The authentication server receives a lot of trust from the user, as it controls the user's authentication and authorization data, but it will not be able to
impersonate the user on other domains' services, as the service and the user will also share a secret key. There is still room for impersonating a
client on the same domain, although that is technically unavoidable in any design, since the service and the authentication server belong to the same
organization (authentication and logs can always be forged by administrators who control the services). More on this in section \ref{Attacks}.


