% !TeX root = Fenrir.tex
\part{The project, Fenrir}[standards.png][The good thing about standards is that there are a lot to choose from...]
\chapter{Overview}
\lettrine{A}{fter} having looked at the current situation, we stick to the cleanest solution, which is the creation of a new low-level protocol that spans
multiple layers of the conventional ISO/OSI stack. This choice will obviously increase the initial amount of work needed to develop our system, but by not
having to rely on old technology we will be able to fully control the security properties of the system for the first time in the development of a protocol.
It may seem that we are creating an overly complex protocol, but when comparing the result with the number of protocols we aim to replace
(TCP/UDP, (D)TLS, OAuth and more), the complexity of our solution will clearly be lower than the total complexity of the possible interactions of existing
solutions, not to mention the missing analysis of the security properties of the interactions between the different possible user configurations, or their limitations
due to legacy choices.
\section{Fenrir}
What we are looking for is something that can not only fully handle the various data delivery options, but that finally has good support for
both federation and authorization, alongside authentication.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{images/Fenrir_logo.png}
\caption{Fenrir Logo}
\label{fig:Fenrir_Logo}
\end{figure}
\section{Federated Authentication}
The main feature is -obviously- the federated authentication. This means that we will need some form of interaction between the servers of multiple
independent domains, with each domain trusting only its own users. Therefore, each user will be identified by its user and domain name, in an email-like format.
For this, we borrow the distinction provided by Kerberos for our system, and we divide the players into three (plus one):
\index{Client Manager}\index{Authentication Server}\index{Federation}
\begin{itemize}
\item \textbf{Authentication Server}: in short: \textbf{AS}. Handles authentication and authorization for its domain.
\item \textbf{Service}: the service the client wants to use. Be it the mail service or a web service.
\item \textbf{Client}: the user program that connects to a \textit{Service}.
\item \textbf{Client Manager}: the program that manages authentication data for the user.
\end{itemize}
\subsection{Decoupling authentication from application}\index{Authentication!Decoupling}
The first two distinctions are fairly straightforward: the authentication server will handle authentication, so that the service can be designed without having
access -for example- to the password database or to the user logins. This is an important distinction, as applications are much more vulnerable to bugs and
have a much higher attack surface. By decoupling the two, the user and password databases should be better protected, as the only application that has
access to them is the one specifically designed to protect them.
The distinction between \textit{Client} and \textit{Client Manager} has the same purpose. Current applications usually save login information in cleartext,
or in a poorly-obfuscated manner (base64 and the like). For the same reasons as before, we want to decouple authentication from the
application itself. This will permit the system-wide usage of strong authentication methods like security tokens or smart cards, provide better support
for modern authentication algorithms, instead of having clients rely on old methods like the deprecated SSLv3, and over time will provide better security for
tomorrow's legacy applications, as the security of the application will be upgradable regardless of the application itself.
Decoupling authentication from the application will have one more interesting outcome: as the \textit{Client Manager} will handle both authorization and
authentication, it will be able to limit the application scope, so that the user will be able to restrict the applications it does not trust, or those that only need to
check for the existence of an account.
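To make the scope-limiting idea concrete, here is a minimal sketch of a client manager that only issues per-application tokens for scopes the user has explicitly approved. All names and the scope strings are illustrative assumptions, not part of Fenrir's specification.

```python
# Hypothetical sketch: a client manager that issues tokens restricted to
# user-approved scopes. Names and scope strings are illustrative only.
import secrets

class ClientManager:
    def __init__(self):
        self._grants = {}   # application name -> set of user-approved scopes
        self._tokens = {}   # issued token -> (application, scope)

    def approve(self, app, scopes):
        """Record which scopes the user allows for an application."""
        self._grants[app] = set(scopes)

    def request_token(self, app, scope):
        """Issue a token only if the user approved this scope for the app."""
        if scope not in self._grants.get(app, set()):
            raise PermissionError(f"{app!r} is not allowed scope {scope!r}")
        token = secrets.token_hex(16)
        self._tokens[token] = (app, scope)
        return token

cm = ClientManager()
# an untrusted application may be limited to merely checking account existence
cm.approve("mail-client", {"read-mail", "account-exists"})
token = cm.request_token("mail-client", "account-exists")   # allowed
# cm.request_token("other-app", "read-mail")  # would raise PermissionError
```

The point of the sketch is that the trust decision lives in one hardened program, not in each application.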
This means that both the \textit{Authentication Server} and the \textit{Client Manager} will be the primary targets of attacks towards Fenrir, but it also means
that the attack surface will be much smaller, and hardening efforts can be concentrated in a single piece of software.\\
As popular software migrates towards the web, this situation is increasingly common anyway: web browsers need to track each and every user
password, and users do not care about security, which means that the password database will often be in cleartext; moreover the attack surface of a
browser is huge, especially thanks to its plugin system.
Users have always had a bad relationship with security. Forcing them to use and remember hundreds of different passwords will obviously result in
password reuse, weak passwords and a general aversion towards security. Collapsing hundreds of passwords into a single login (thanks to federation)
and hardening the client manager will create a single point of failure (which is already present in the form of browsers), but will increase the difficulty
of a successful attack, therefore increasing overall security.\\
\subsection{The algorithm}
The in-depth algorithm is discussed in chapter \ref{FederationAlgorithm}; only an outline is presented here.
Federated authentication algorithms are nothing new. As per our earlier distinction, we will focus on a Kerberos-like infrastructure.\\
Due to the decoupling previously introduced, our protocol needs some way to tell the various interacting players that a user has certain authentication and
authorization rights. This is done through the use of a token, so that logins and passwords never reach the \textit{Client} or the \textit{Service}.\\
One characteristic of many authentication algorithms is the usage of timestamps to protect the authorization tokens or messages in general.
While this provides safety by putting an expiration time on the usage of said tokens, it also means that applications, servers and authentication servers must
have at least loosely-synchronized clocks.
Although nowadays clock synchronization seems easy and widespread, it is not yet at a state where we can safely assume that clocks have little discrepancy.
Embedded devices are still produced without a clock source, so each time they are booted the clock is reset to 1970. The most famous clock synchronization
protocol (NTP) is almost always used in cleartext, and basing our clock on an attacker-controllable source is not acceptable.\\
Requiring all clocks in the world to be synchronized to within a couple of minutes of each other, even on devices that do not have stable clock
sources, is in our opinion wrong. Therefore Fenrir will \textit{not} use timestamps. This means that occasionally additional round trips will be needed
to check the validity of the data, but it also means that tokens can be simplified, as they do not need signatures anymore, and a token revocation will be
effective immediately.
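The timestamp-free approach above can be sketched as follows: tokens are opaque random values, validity is established by asking the authentication server (one extra round trip), and revocation is a simple deletion. This is an illustrative model, not Fenrir's wire format.

```python
# Illustrative sketch (not Fenrir's actual token format): opaque random
# tokens validated by a round trip to the authentication server. No
# signatures, no timestamps, so no clock synchronization is needed and
# revocation takes effect immediately.
import secrets

class AuthServer:
    def __init__(self):
        self._valid = {}  # token -> user identity

    def issue(self, user):
        token = secrets.token_bytes(32)  # unguessable; carries no signature
        self._valid[token] = user
        return token

    def check(self, token):
        """Called by a service over the network: the extra round trip."""
        return self._valid.get(token)    # None means invalid or revoked

    def revoke(self, token):
        self._valid.pop(token, None)     # effective immediately

auth = AuthServer()
tok = auth.issue("client@example.com")
assert auth.check(tok) == "client@example.com"
auth.revoke(tok)
assert auth.check(tok) is None
```

A signed, timestamped token would avoid the round trip but could only expire, not be revoked, before its deadline.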
As the figure below shows, the protocol will rely heavily on the authentication servers, which act as a trusted third party. The image is only
an outline, and assumes a shared token between client@example.com and the authentication server of example.com.
\label{FederationExample}
\begin{figure}[h]
\centering
\begin{framed}
\centering
\begin{tikzpicture}[node distance=4cm,>=stealth]
\node[label=above:{Auth.Srv example.com}] (AS1) {\includegraphics[width=2cm,keepaspectratio]{images/auth_server.png}};
\node[label=above:{Auth.Srv domain.com}, right= 3.5 cm of AS1] (AS2) {\includegraphics[width=2cm,keepaspectratio]{images/auth_server.png}};
\node[below of=AS2,left=2.5cm of AS2, label=below:{Client example.com}] (C) {\includegraphics[width=2cm,keepaspectratio]{images/computer.png}};
\node[below of=AS2,right=1.5cm of AS2, label=below:{Service domain.com}] (S) {\includegraphics[width=2cm,keepaspectratio]{images/server.png}};
\draw[<-,thick] (AS2.180) -- node[below]{$1: auth, use ``service''$} (C.90);
\draw[<->,thick] (AS1.30) -- node[below]{$2: check account$} (AS2.150);
\draw[->,thick] (AS2.340) -- node[right]{$3: new user: id, keys$} (S.60);
\draw[<-,thick] (AS2.300) -- node[below]{$4: user registered$} (S.120);
\draw[->,thick] (AS2.250) -- node[below]{$5: ok: ip, keys$} (C.20);
\draw[<->,thick] (C.340) -- node[below]{$6: communication$} (S.200);
\end{tikzpicture}
\end{framed}
\caption{Fenrir overview: client@example.com connects to ``service'' in domain.com}
\end{figure}
The diagram describes the interaction of different domains; the flow is obviously simpler in the case of single-domain logins.
The service will not receive the login data; in fact it will only get a connection id, cryptographic keys and an internal user id. As confirmation, the client
will receive the ip, connection id and keys to connect to the service, so that no further handshakes are needed. Moreover, since the client receives a notification
only when the connection has been confirmed between the authentication server and the service, we avoid unintuitive errors like ``authenticated but not
connected'', typical of protocols like Kerberos or OAuth, where the authentication server is too detached from the service.
The authentication server will receive a lot of trust from the user, as it needs to control their authentication and authorization data, but it will not be able to
impersonate the user on services of other domains, as the service and the user will also share a secret key. There will still be room for impersonating a
client on a service of the same domain, although that is technically unavoidable with any design, as the service and the authentication server belong to the same
organization (authentication data and logs can always be forged by the administrators that control the services). More on this in section \ref{Attacks}.
\section{Transport}
\subsection{Layer 4: UDP tunnel}\index{Transport UDP}
The history of the SCTP protocol tells us that introducing new functionality while being incompatible with existing network infrastructure is asking to fail.
However, it also tells us that a UDP-based protocol can move up to a standalone one, given enough traction. SCTP evolved maybe too quickly from UDP-based to standalone: firewalls worldwide did not update, and very few applications ended up using it.
For these reasons our protocol will be based on top of IP, but will include UDP as a lightweight tunnel in its main components, so that existing
infrastructure will not have problems handling it.
Using UDP as a lightweight tunnel permits us to use a single socket for the transmission and reception of every connection, without having the kernel track
every connection for us. Firewalls will permit UDP traffic, as the DNS system is based on it, and NATs will continue working as always.\\
Having only one socket for everything permits us to handle everything in user space, and we also avoid the connection-tracking hell of protocols
like FTP, which use multiple TCP connections in an attempt to separate data contexts.
User-space connection tracking will prove useful for an evolving protocol, as we do not have to wait for new kernel releases or patches to update or test
variations of the protocol. It will also make it easier to port the project to multiple operating systems, as the kernels do not need to be touched.
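The single-socket, user-space model can be sketched as below: one UDP socket receives every datagram, and each packet is routed to its connection state by an identifier carried in the packet itself rather than by the kernel's address/port tuple. Field sizes and names here are assumptions for illustration, not Fenrir's real wire format.

```python
# Minimal sketch of user-space demultiplexing over a single UDP socket.
# The 4-byte connection id at the start of each packet is a placeholder
# size, not Fenrir's actual header layout.
import socket
import struct

CONN_ID_LEN = 4                 # hypothetical 4-byte id at the packet start
connections = {}                # connection id -> per-connection state object

def parse_conn_id(data):
    """Extract the connection id, or None if the packet is too short."""
    if len(data) < CONN_ID_LEN:
        return None
    return struct.unpack("!I", data[:CONN_ID_LEN])[0]

def serve_once(sock):
    """Receive one datagram and hand it to the right connection."""
    data, addr = sock.recvfrom(65535)
    conn_id = parse_conn_id(data)
    state = connections.get(conn_id)
    if state is None:
        return                  # unknown or malformed: drop the packet
    # addr is NOT part of the lookup: the peer may roam or be multihomed
    state.handle(data[CONN_ID_LEN:], addr)
```

Because the source address plays no part in the lookup, a client that changes IP mid-connection keeps its connection, which is exactly the multihoming and mobility property discussed below.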
\subsection{Layer 4.5: Fenrir}
\subsubsection{Connection}
The very first thing we need is something to identify the connection. Older protocols use the tuple of source IP, destination IP, source and destination port,
but this is unnecessary and spans multiple protocols which should be independent. The solution is simply to use a single identifier as the connection id.
This alone grants independence from the IP layer, enabling us to support multihoming and mobile clients, and all encryption data will be bound to this id.
Once we have identified the connection, we need to check whether the packet is legitimate. In order, we first check whether the packet is corrupted
through an error correction code, then check the packet's legitimacy by verifying a cryptographic header (HMAC-like), and finally decrypt the
packet and access its contents.\\
The last two steps can be condensed into one if we use AEAD ciphers (Authenticated Encryption with Associated Data).
The algorithm to be used will be decided during the handshake: this might cost us one full RTT, but unlike MinimaLT, we will not be tied to a single algorithm,
so that if problems are found in the future, we can simply shift to a new algorithm instead of throwing away the whole protocol.
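The receive path just described can be sketched as a three-step pipeline. The error-correction decoder, MAC key and cipher are placeholders here; in Fenrir they are negotiated during the handshake rather than fixed.

```python
# Sketch of the per-packet receive pipeline: error correction first,
# then authentication, then decryption. Primitives are placeholders.
import hmac
import hashlib

TAG_LEN = 32                         # HMAC-SHA256 output size

def receive(packet, ecc_decode, mac_key, decrypt):
    # 1. detect/repair transmission errors with the error correction code
    packet = ecc_decode(packet)
    if packet is None or len(packet) <= TAG_LEN:
        return None                  # unrecoverable corruption: drop
    payload, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    # 2. verify authenticity BEFORE touching the ciphertext
    expected = hmac.new(mac_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                  # forged or damaged packet: drop
    # 3. only now decrypt and hand the cleartext to the upper layers
    return decrypt(payload)
```

With an AEAD cipher, steps 2 and 3 collapse into a single decryption call that fails if the data was tampered with.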
With this setup, all data other than the connection id will be encrypted and authenticated. Regarding the encryption, 4 methods are available:
\begin{itemize}
\item \textbf{MAC then Encrypt} : First authenticate the cleartext data, then encrypt everything. Used in TLS.
\item \textbf{Encrypt AND MAC} : authenticate the cleartext, encrypt the cleartext, and transmit the ciphertext along with the cleartext's MAC.
Used in SSH.
\item \textbf{Encrypt then MAC} : Encrypt the cleartext, then authenticate the encrypted cleartext. Used in IPSEC.
\item \textbf{Authenticated Encryption}: new algorithms can authenticate data while encrypting it, so that no additional headers are required.
\end{itemize}
These various methods were analyzed in 2008\cite{Bellare:2008:AER:1410264.1410269}, and in 2009 even the ISO/IEC committee proposed a
standard\cite{ISOIEC19772} where the recommended mode of encryption is \textbf{Encrypt-then-MAC}. Authenticated encryption algorithms were born later
but are regarded as being as secure as the ISO-proposed method.
A short summary of the analysis mentioned earlier:
\begin{itemize}
\item \textbf{MAC then Encrypt} : Only the cleartext is authenticated. If the cipher is malleable, the attacker can change both the MAC and the cleartext.
This is what happened in WEP.
\item \textbf{Encrypt AND MAC} : The MAC is not secret, and thus can reveal some information on the cleartext (especially in contexts where data is mostly static).
It does not grant ciphertext integrity.
\item \textbf{Encrypt then MAC} : ciphertext integrity is granted, so cleartext integrity is granted by composition. Authenticity can be verified before
decryption.
\end{itemize}
Due to these reasons, the only modes available in Fenrir will be AEAD and Encrypt-then-MAC. An example of a famous AEAD cipher is OCB3\footnote{\href{http://www.cs.ucdavis.edu/~rogaway/ocb/}{OCB homepage: papers and implementation: http://www.cs.ucdavis.edu/~rogaway/ocb/}},
which is now an IETF draft.\\
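To make the Encrypt-then-MAC ordering concrete, here is a toy construction. The "cipher" is a SHA-256 counter keystream chosen only so the example is self-contained; it is deliberately not a real cipher suite and must not be used in production. The point is the order of operations: encrypt first, then MAC the ciphertext, and on reception verify before decrypting.

```python
# Toy Encrypt-then-MAC construction, for illustration only. The keystream
# "cipher" is NOT secure; only the encrypt/authenticate ordering matters.
import hmac
import hashlib

def _keystream(key, nonce, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, cleartext):
    # 1. encrypt the cleartext
    ct = bytes(a ^ b for a, b in
               zip(cleartext, _keystream(enc_key, nonce, len(cleartext))))
    # 2. authenticate the CIPHERTEXT (and the nonce), not the cleartext
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag          # nonce assumed to be 8 bytes

def open_(enc_key, mac_key, packet):
    nonce, ct, tag = packet[:8], packet[8:-32], packet[-32:]
    # verify authenticity BEFORE decrypting anything
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                  # reject without touching the cipher
    return bytes(a ^ b for a, b in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Because the tag covers the ciphertext, a forged or tampered packet is rejected before any decryption takes place, which is exactly the property the analysis above recommends.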
Recently the CAESAR\cite{CAESAR} competition was created to determine a standard from the various AEAD ciphers, the winner is expected to be announced