% !TeX root = Fenrir.tex

\part{Conclusions}[extrapolating.png][Well, the data proves it.]

\chapter{Conclusion and future works}

\section{Future Works}
\label{future}
\begin{itemize}
\item Move the normal authentication token to an OTP. Benefit: token leakage to third parties becomes detectable, since the third party would be forced to ``consume'' the OTP. Limiting the OTP to a single hash would force verification of the user device, but might create problems due to connection errors and/or not-yet-synchronized data on the filesystem.
\item Multicast: widen the connection id to 64 bits and reserve half of the space for multicast?
\item Synchronization between authentication servers, to drop active connections in case of device loss? (And to block the undo operation?)
\item CryptoVerif: provide a proof in the computational model, time permitting.
\item ProVerif: the broken-handshake case is included, but not a broken protocol version. The version negotiation can be considered part of the handshake due to how Fenrir works, but this should be stated explicitly.
\item Granting authorization to third parties, for example to authenticate on a device that has no keyboard.
\end{itemize}
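As a sketch of how the OTP token could work, consider a Lamport-style hash chain: the verifier stores only the last accepted token, and each newly presented token must hash to it, so a leaked token is consumed the moment it is used. All names and parameters below are illustrative, not part of Fenrir's specification.

```python
import hashlib
import os

def make_chain(seed: bytes, length: int) -> list:
    """Hash chain: element i+1 is SHA-256 of element i.
    Tokens are spent from the end of the chain backwards."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(last_accepted: bytes, presented: bytes) -> bool:
    """A presented token is valid iff it hashes to the last accepted one."""
    return hashlib.sha256(presented).digest() == last_accepted

chain = make_chain(os.urandom(32), 100)
anchor = chain[-1]                    # the verifier stores only this value
assert verify(anchor, chain[-2])      # the next token in the chain is accepted
assert not verify(anchor, chain[-3])  # skipping a step is rejected
```

After a successful verification the verifier would replace its anchor with the presented token, which is what makes a third party's use of a leaked token observable.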


\section{Connection id}

This field might be widened to 64 bits to increase the number of possible connections, to support reliable multicast connection tracking, and to enable caching proxy support.

A connection id of 32 bits means that we can already handle more than 4 billion connections. Each connection can be quite complex, with multiple streams and not-yet-acknowledged data, so merely tracking all of them would require hundreds of gigabytes of state. The worst case, all $2^{32}$ connections transmitting at full capacity (116\,TB/s), would require tracking all of that data plus per-stream information, which is far beyond current computational capabilities.
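To put a rough number on that claim: assuming (hypothetically) a minimal 64 bytes of per-connection state for keys, counters, and window bookkeeping, a fully used 32-bit id space already needs hundreds of gigabytes before any stream data is counted:

```python
connections = 2 ** 32    # every 32-bit connection id in use
state_bytes = 64         # assumed minimal per-connection state (keys, counters, windows)

total = connections * state_bytes          # bytes of bookkeeping alone
print(total // 2 ** 30, "GiB")             # prints: 256 GiB
```

Any realistic per-connection state is larger than 64 bytes, so this is a lower bound on the figure quoted above.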

So increasing the connection id to 64 bits sounds both extreme and unneeded, but we should consider multicast: the increased connection id space would let a server choose a random connection id with a very low probability of clashing with another server's multicast connection id.
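The effect of the wider id space on random selection can be estimated with the standard birthday bound; the parameters below (one million simultaneously chosen ids) are purely illustrative:

```python
import math

def collision_probability(n_ids: int, id_bits: int) -> float:
    """Birthday-bound approximation: probability that at least two of
    n_ids independently chosen random ids (id_bits wide) collide."""
    space = 2 ** id_bits
    return 1.0 - math.exp(-n_ids * (n_ids - 1) / (2.0 * space))

# One million parties picking random 64-bit multicast connection ids:
p64 = collision_probability(10 ** 6, 64)   # ~2.7e-8, negligible
# The same load against the current 32-bit space:
p32 = collision_probability(10 ** 6, 32)   # ~1.0, a clash is practically certain
```

This is why random selection only becomes a reasonable clash-avoidance strategy once the id space grows to 64 bits.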

\section{Caching Proxy}

The maximum connection id (binary id: $1...1$) should be reserved, to include caching features in the protocol.

A big problem with HTTPS right now is the inability to cache encrypted content. Since content is encrypted on the server side, current CDNs (Content
Delivery Networks) require the server either to hand over its private keys, or to implement a protocol that provides the CDN network with the session key
negotiated between client and server. While the second choice still protects the private keys, it is essentially a voluntary MITM attack,
and the CDN retains complete control over the traffic between client and server.

This situation obviously breaks the trust model of the system, but few organizations can afford a global CDN big enough for their needs. Thus support
for CDNs should be included directly in the protocol.

The idea is to let the application specify which content can be cached and for how long; the client then issues the request for a cached resource in
clear text over the network, using the reserved connection id above.
\begin{itemize}
\item If no CDN node lies between the client and the server, the packet reaches the server, which processes it as usual.
\item If a CDN node receives the packet, it serves the encrypted content to the client; the client then obtains the decryption key directly from the server.
\end{itemize}
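The dispatch logic of the two cases above can be sketched as follows; the names and the dictionary-based cache are hypothetical, not part of any Fenrir API:

```python
RESERVED_CACHE_ID = (1 << 32) - 1   # binary 1...1, reserved for cache requests

def route(connection_id: int, resource: str, cache: dict):
    """Return the cached (still encrypted) blob if this packet is a
    cleartext cache request and this node holds the resource.
    None means: not a cache request, or a miss -- forward to the origin."""
    if connection_id != RESERVED_CACHE_ID:
        return None                  # ordinary encrypted connection: pass through
    return cache.get(resource)       # hit: encrypted content; miss: forward

# A CDN node holding one encrypted resource:
cache = {"/logo.png": b"<encrypted blob>"}
assert route(RESERVED_CACHE_ID, "/logo.png", cache) == b"<encrypted blob>"
assert route(42, "/logo.png", cache) is None   # normal traffic is untouched
```

Note that the CDN node never decrypts anything: the client still has to fetch the decryption key from the origin server, as described above.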

A further optimization might be to tell the client which CDN node is closest to it, in which case a full connection could be established between that CDN node
and the client.

This means that caching proxies could be both transparent and explicit.

In the case of transparent caching, however, only public information should be cached, to prevent tracking of which resources the user is visiting.

This is a very interesting idea, but the risk of user tracking is significant, so it should be handled with care and needs further analysis.


\section{Multicast}

We have done nothing to support multicast so far, but since Fenrir supports unreliable communication, adding multicast delivery should not be too difficult.

The problem with multicast packets is that a single packet has to be delivered unmodified to multiple destinations. This means that the connection id, encryption
keys, and other parameters must be the same for all recipients.

The main problem is obtaining a consistent connection id. As connection ids are normally chosen by the receiving party, we need a way to avoid clashes
when multiple clients hold multiple multicast connections with multiple servers. Malicious clients should be considered too, and if possible
even malicious servers.

Designing the system so that multicast connection ids never clash, without any kind of client synchronization, seems impossible, so we should settle
for lowering the probability of a clash. A malicious server disrupting other servers' multicast connections is a problem, but we should not rely on client
synchronization to change the multicast connection id, since a malicious client is much more likely.

One solution would be to tie the connection id to the IP address when multicast addresses are used; however, this would break our independence from the
lower layers.
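For completeness, the tie-to-address idea amounts to deriving the id deterministically from the multicast group address, so every party agrees on it without any synchronization; the sketch below (hypothetical names) also makes the trade-off visible, since the id becomes a function of a lower-layer name:

```python
import hashlib

def multicast_connection_id(group_address: str, id_bits: int = 64) -> int:
    """Derive the multicast connection id from the group address.
    Deterministic, so all senders and receivers of the same group agree,
    but the id now depends on a lower-layer identifier -- exactly the
    layering violation discussed above."""
    digest = hashlib.sha256(group_address.encode()).digest()
    return int.from_bytes(digest[: id_bits // 8], "big")

# Same group -> same id; different groups -> (almost surely) different ids.
a = multicast_connection_id("239.1.2.3")
assert a == multicast_connection_id("239.1.2.3")
assert a != multicast_connection_id("239.1.2.4")
```

A clash between two groups would require a collision in the truncated hash, which at 64 bits is as unlikely as the random-selection clash estimated in the connection id section.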