% !TeX root = Fenrir.tex


\xkcdchapter[0.55]{Conclusion and future works}{extrapolating}{Well, the data proves it.}
\label{Conclusions}

\section{Conclusions}

\lettrine{O}{ur} model finally decouples the development of an application from its security, letting developers concentrate on the application itself without having to master complex security properties.
The transmission protocol is as flexible as one running in a local network without firewalls, and it is fully customizable.

The protocol is future-proof in the sense that downgrade attacks have been ruled out by design (unlike in SSL/TLS); it also hides many more details than TLS and supports randomizations that can be used to mitigate traffic-analysis attacks such as BEAST or CRIME.

The drawbacks we can identify in the Fenrir protocol when compared to standard protocols are:
\begin{itemize}
\item an increased concentration of attacks on the client manager software, which requires a stronger focus on security in the development of that software
\item a slightly increased setup time for the first connection, as additional checks are required for federation support and because of the strict separation of the application/service from its security
\end{itemize}

But in exchange we get:
\begin{itemize}
	\item greater flexibility in transport modes and features
	\item clear separation of application and security
	\item flexibility in authentication modes
	\item federation
	\item authorization
	\item the user gains control over their accounts, spanning multiple devices and authorizations
\end{itemize}

Also of importance is the client-service secret exchange, which prevents even a compromised authentication server from impersonating its users on existing services. This improvement alone should shift the attention of attackers away from the authentication server.

\section{Future Works}
\label{future}

\subsection{OTP Token and secrets}

Currently, the authorization token is just a string of random bytes.

The idea is to move the token to a hash-based OTP~\cite{Lamport:OTP}. A hash-based OTP can only be used up to a certain number of times, after which it must be regenerated.
Since the OTP index decreases with each authentication, the client manager can detect whether its tokens have been exposed to some third party, and can thus inform the user of the threat.
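
A minimal sketch of the hash-chain idea follows (the chain length, the choice of SHA-256 and the names used here are illustrative assumptions, not part of the protocol):

\begin{verbatim}
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, length: int = 1000) -> list[bytes]:
    """Lamport-style hash chain: chain[i+1] = H(chain[i])."""
    chain = [seed]
    for _ in range(length):
        chain.append(sha(chain[-1]))
    return chain

class ServerVerifier:
    """Server side: stores only the last value it accepted (initially the tip)."""
    def __init__(self, anchor: bytes):
        self.anchor = anchor

    def authenticate(self, token: bytes) -> bool:
        if sha(token) == self.anchor:
            self.anchor = token   # move the anchor one step down the chain
            return True
        return False

# The client manager keeps the whole chain and reveals values in reverse order;
# each successful authentication consumes one index.
chain = make_chain(b"random-secret-seed")
server = ServerVerifier(anchor=chain[-1])
assert server.authenticate(chain[-2])
assert server.authenticate(chain[-3])
# If the server ever expects a lower index than the client manager handed out,
# some third party must have spent a token, and the user can be warned.
\end{verbatim}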

It is not yet clear, however, how a large loss of packets impacts authentication, as this case has not been studied.

Even the shared secrets (\ref{shared secrets}) might be changed from static secrets to OTPs. The advantage would be that client managers would immediately recognize unauthorized access to the services, as their secret would no longer match the service's secret or the backup trust secret. It would, however, require tight synchronization of such secrets between the client managers, possibly making this method unsuitable for use.

\subsection{Anonymous Authentication}

This seeming contradiction in terms can actually be thought of as authentication with undisclosed information (login included). It is useful when we do not want to give out our username (which is also our email address) to a third-party service, but we still want to be able to access its services. By hiding the real username, we hope that identifying the person becomes harder and requires legal requests, instead of merely combining different databases.

The obvious implementation is the automatic request and creation of temporary accounts, or \textbf{account aliases}\index{Account Aliases}.

Keeping in mind how the federation works (see \ref{FederationExample}), in an example where \textit{client@example.com} connects to a service in the \textit{domain.com} domain, the authentication server for \textit{domain.com} only needs to remember that one of its accounts can be accessed by the \textit{example.com} domain. The authentication server for \textit{example.com} will then confirm or deny that the account at \textit{domain.com} can be accessed by the client's account.
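
As a minimal sketch of what the two authentication servers might store for such an alias (the record layout and all names are purely illustrative assumptions):

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class AliasGrant:
    """Kept by the authentication server of domain.com: it only knows that
    this local alias may be used by accounts vouched for by example.com."""
    alias: str             # e.g. "alias-7f3a@domain.com"
    trusted_domain: str    # "example.com"

@dataclass
class AliasBinding:
    """Kept by the authentication server of example.com: it maps the alias
    back to the real account and never discloses that mapping to domain.com."""
    alias: str             # "alias-7f3a@domain.com"
    real_account: str      # "client@example.com"

def confirm(bindings: dict[str, AliasBinding], alias: str, account: str) -> bool:
    """example.com confirms or denies that `account` owns `alias`."""
    binding = bindings.get(alias)
    return binding is not None and binding.real_account == account
\end{verbatim}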

Although it looks easy at first sight, the implications have not been fully studied, and therefore this is regarded as future work.

\subsection{Sub-Protocols}
\index{Sub-protocols}

Having many streams makes it possible to easily multiplex and parallelize multiple data transfers.
This means that we could create many sub-protocols, integrated as plugins in the main software,
so that common tasks such as file transfer or audio/video streaming are delegated to these plugins,
making the development of new applications easier and more standards-conformant.
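
A possible shape for such a plugin interface is sketched below (the class and method names are assumptions made for illustration, not part of the current implementation):

\begin{verbatim}
from abc import ABC, abstractmethod

class SubProtocol(ABC):
    """A plugin that owns one or more Fenrir streams for a common task
    (file transfer, audio/video, ...), so that applications reuse it
    instead of reimplementing the task on raw streams."""

    @abstractmethod
    def streams_needed(self) -> int:
        """How many parallel streams the plugin wants to open."""

    @abstractmethod
    def send(self, data: bytes) -> None:
        """Hand application data to the plugin, which splits it over its streams."""

    @abstractmethod
    def on_data(self, stream_id: int, payload: bytes) -> None:
        """Called by the core when data arrives on one of the plugin's streams."""
\end{verbatim}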

\subsection{Connection id}

This field might be extended to 64 bits to increase the number of possible connections, to allow reliable multicast connection tracking, and to include caching
proxy support.

A connection id of 32 bits means that we can handle more than 4 billion connections. Each connection can be quite complex, with multiple streams and
not-yet-acknowledged data, so just tracking them all would already require hundreds of gigabytes. The worst case is having all $2^{32}$ connections transmitting at
full capacity (116TB/s), and we would need to track all that data, plus stream information... which is way beyond current computational capabilities.

So increasing the connection id to 64 bits sounds both extreme and unneeded, but we should consider multicast: the increased connection id space might
mean that servers can choose a random connection id without much probability of clashing with another server's multicast connection id.
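
As a rough illustration, the usual birthday bound for $n$ ids chosen uniformly at random in a space of size $N$ gives a clash probability of roughly $n^2/(2N)$. Assuming, purely for the sake of the example, $n = 10^6$ simultaneous multicast connections:
\[
\frac{(10^6)^2}{2\cdot 2^{64}} \approx 2.7\cdot 10^{-8}
\qquad \mbox{whereas} \qquad
\frac{(10^6)^2}{2\cdot 2^{32}} \gg 1,
\]
so random assignment is practically safe with 64-bit ids, while with 32-bit ids a clash would be almost certain.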


\subsection{Multicast}

We have done nothing to support multicast until now, but since Fenrir supports unreliable communications, adding multicast delivery should not be too difficult.

The problem with multicast packets is that a single packet has to be delivered unmodified to multiple destinations.
This means that connection id, encryption keys and other features have to be the same for all recipients.

The big problem is having a consistent connection id.
As connection ids are usually decided by the receiving party, we need a way to avoid clashes when multiple clients have multiple multicast connections with multiple servers. Malicious intent on the part of clients should be considered too, and, if possible, even malicious servers.

Designing the system so that multicast connection ids never clash, without any kind of client synchronization, seems impossible, so we should limit ourselves to lowering the probability of a clash.
A malicious server disrupting other servers' multicast connections is a problem, but we should not rely on client synchronization to change the multicast connection id, since a malicious client is much more probable.

A naive solution would be to tie the connection id to the IP address when multicast addresses are used. However, this would break our independence from the lower layers.
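
A minimal sketch of the random-assignment approach follows (the 64-bit space is the extension discussed in the previous subsection; the function below is illustrative only):

\begin{verbatim}
import secrets

def pick_multicast_id(ids_in_use: set[int]) -> int:
    """Pick a random 64-bit multicast connection id.

    The local check only avoids clashes with this server's own connections;
    clashes with other servers' multicast ids can merely be made improbable
    by the size of the id space (see the estimate in the previous subsection).
    """
    while True:
        candidate = secrets.randbits(64)
        if candidate not in ids_in_use:
            ids_in_use.add(candidate)
            return candidate
\end{verbatim}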


\subsection{Caching Proxy}

The maximum connection id (binary id: $1...1$) should be reserved to add caching features to the protocol.

A big problem with HTTPS right now is the inability to cache encrypted content. As the content is encrypted on the server side, current CDNs (Content
Delivery Networks) require the server either to give up the private keys used for the key exchange, or to include a protocol that provides the key negotiated
between client and server to the CDN's network. While the second choice still protects the private keys, it is basically a voluntary MITM attack,
and the CDN retains complete control over the client and server traffic.

This situation obviously breaks the trust in the system, but few organizations can afford a global CDN big enough for their needs. Thus we should include
support for CDNs directly in the protocol.

The idea is to let the application specify which content can be cached and for how long; the client will then issue the request for a cached resource in
clear text over the network, using the reserved connection id above (a rough sketch of this flow follows the list below).
\begin{itemize}
	\item If no CDN is found between the client and the server, the packet will get to the server, which will process it.
	\item If a CDN receives the packet, it will give the encrypted content to the client. The client will then get the decryption key directly from the server.
\end{itemize}
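
A rough sketch of the client side of this flow is given below (every name is a hypothetical placeholder; the actual request format, the reserved id value and the key-retrieval step are not specified yet):

\begin{verbatim}
from typing import Callable

# The reserved all-ones connection id for cache requests
# (2**64 - 1 if the connection id is extended to 64 bits).
CACHE_CONNECTION_ID = 2**32 - 1

def fetch_cached(resource: str,
                 send_cleartext: Callable[[int, str], bytes],
                 get_key_from_server: Callable[[str], bytes],
                 decrypt: Callable[[bytes, bytes], bytes]) -> bytes:
    """Client side of the sketched CDN flow.

    1. The request travels in clear text on the reserved connection id, so any
       CDN node on the path may answer it; otherwise it reaches the server.
    2. Whoever answered only holds the encrypted content; the decryption key
       is obtained from the server over the normal encrypted connection.
    """
    encrypted_content = send_cleartext(CACHE_CONNECTION_ID, resource)
    key = get_key_from_server(resource)
    return decrypt(key, encrypted_content)
\end{verbatim}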

A further optimization might be telling the client which CDN node is closest to it, in which case a full connection might be established between the CDN server
and the client.

This means that caching proxies could be both transparent and explicit.

In the case of transparent caching, however, only public information should be cached, to prevent tracking of which resources the user is currently visiting.\\

This is a very interesting idea, but the risk of user tracking is great, so it should be handled with care and needs more analysis.


\subsection{Client to Client communications}

Instead of letting the developer handle firewalls and complex STUN protocols, Fenrir should include a way to let two users communicate directly.

The right way would be to copy the inner workings of the STUN protocol, so that Fenrir over UDP can quickly traverse NATs. However, two problems arise and remain unresolved
as of now:

\begin{itemize}
\item \textbf{device identification}: the user can be logged in from multiple devices, so just trying to connect to a username does not make much sense.
	For this, usernames should include some form of device identification, for example as in Jabber (\textit{user@domain.com/device}) or as in the
	more common email subaddressing (\textit{user+device@domain.com}).
\item \textbf{spam problems}: letting everyone connect to everyone will increase the amount of spam that has to be managed by any application by a wide
	margin, making first contacts problematic (if the users already know each other, they could filter requests from non-registered users).
\end{itemize}

Device identification is a matter of choice, and the Jabber way is probably better, as the email solution requires every provider to standardize on the same
symbol `+' for subaddressing.
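
A trivial sketch of the two naming conventions (purely illustrative; no format has been chosen yet):

\begin{verbatim}
def split_jabber_style(address: str) -> tuple[str, str | None]:
    """'user@domain.com/device' -> ('user@domain.com', 'device')."""
    account, _, device = address.partition('/')
    return account, device or None

def split_subaddress_style(address: str) -> tuple[str, str | None]:
    """'user+device@domain.com' -> ('user@domain.com', 'device').
    Only works if every provider agrees on '+' as the separator."""
    local, _, domain = address.partition('@')
    user, _, device = local.partition('+')
    return user + '@' + domain, device or None
\end{verbatim}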

The spam problem, instead, is a big one, as limiting the application to registered users would also limit the possible uses of the application itself
(e.g.: no mail without the users registering each other in some way).
The spam problem is a show-stopper for this feature, as it would be too easy to abuse.