Our ideas differ, so let’s analyze the problem.
Signal is a very nice piece of software for your mobile phone that encrypts messages and calls. I use it, and although I do not share all of their views, I am sure I’ll keep using it.
Signal’s infrastructure is completely centralized. The software per se is open source, though.
The main point of their infrastructure seems to be: “we don’t want people to build half-assed clients/servers, so we keep everything to ourselves”.
“The ecosystem is moving”, as they say, and the only way to keep up is to break things now and then and to develop clients and servers in-house, since that is the only way to have a feature-complete client.
That is surely the easiest way out, but let’s look at a couple of the examples they give.
The canonical example of a federated environment is, as always, the email system.
“One potential benefit of federation is the ability to choose what provider gets access to your meta-data. However, as someone who self-hosts my email, that has never felt particularly relevant, given that every email I send or receive seems to have gmail on the other end of it anyway. Federated services always seem to coalesce around a provider that the bulk of people use.”
All of this is true. But to be fair, I have rarely seen a system as complex as the email system.
When setting up an email system you need to configure postfix, dovecot, amavis, spamassassin, SPF, DKIM, DMARC and more, while mailing lists are a horror all of their own kind.
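As a taste of that complexity, just the anti-spoofing part (SPF, DKIM, DMARC) requires publishing DNS TXT records along these lines. This is only a sketch: example.com, the `mail` DKIM selector, and the truncated public key are placeholders.

```
; SPF: which hosts may send mail on behalf of example.com
example.com.                  IN TXT "v=spf1 mx -all"

; DKIM: public key that receivers use to verify message signatures
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."

; DMARC: what receivers should do when SPF/DKIM checks fail
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

And that is before you touch the mail server software itself, its TLS setup, or its spam filtering.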
Very few people can do that. The email system is probably the most tortured set of protocols ever made. It’s only natural that you might want to use a service provided by someone else. You will search for someone you think is trustworthy, and big companies tend to project exactly that image.
Still, almost all companies have dedicated mail servers, which means that federation still makes sense. Federation, by itself, is meant to separate domains of interest. For a company, using a completely centralized system like Signal might not be the best of ideas.
Federation is also easier to scale up. Signal can handle being a non-federated system because a single message requires so little data. Trying to manage the email system the way Signal is managed would be outright impossible, even if everyone suddenly agreed to it.
Extreme complexity: bad. Really bad.
But lots of people still use dedicated systems when they have to manage sensitive stuff. Centralized systems can’t be trusted that much.
Likewise, SMTP, IRC, DNS, XMPP, are all similarly frozen in time circa the late 1990s. An open source infrastructure for a centralized network now provides almost the same level of control as federated protocols, without giving up the ability to adapt. If a centralized provider with an open source infrastructure ever makes horrible changes, those that disagree have the software they need to run their own alternative instead. It may not be as beautiful as federation, but at this point it seems that it will have to do. XMPP is an example of a federated protocol that advertises itself as a “living standard.” Despite its capacity for protocol “extensions,” however, it’s undeniable that XMPP still largely resembles a synchronous protocol with limited support for rich media, which can’t realistically be deployed on mobile devices. If XMPP is so extensible, why haven’t those extensions quickly brought it up to speed with the modern world?
Wait a sec: “If a centralized provider with an open source infrastructure ever makes horrible changes, those that disagree have the software they need to run their own alternative instead.”
Uh… not really. What about contact lists? My friends would not see me on Signal anymore.
What about the non-interoperability between services because, you know, it’s not a federated environment?
Now I would have to convince everyone to use my version of Signal, too.
This scenario only works if everyone suddenly jumps ship together. Not gonna happen.
I have tried to manage XMPP servers a couple of times. They are not as bad as the email system. It’s just that XMPP was extremely badly designed.
Server setup is easy (ejabberd/openfire). Client setup is (almost) easy. The problem is exactly as reported: the protocol itself is bad. So bad that no open source client supports anything non-text-oriented well (no, not even Pidgin).
XMPP is all XML. Ever tried reading raw XML? Or worse, parsing it? And what about encoding binary data in a text-oriented protocol? Of course it’s bad: XML is verbose, slow, and hard to parse.
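To see what this means in practice, here is a minimal sketch in Python: a typical message stanza parsed with the standard library, and the base64 overhead that binary payloads pay to travel inside a text protocol (the addresses are placeholders; in-band binary transfer in XMPP, e.g. XEP-0047, does use base64).

```python
import base64
import xml.etree.ElementTree as ET

# A typical XMPP message stanza: everything, including routing
# information, is XML text that must be parsed on every hop.
stanza = (
    '<message xmlns="jabber:client" from="alice@example.com" '
    'to="bob@example.com" type="chat">'
    '<body>hello</body></message>'
)
msg = ET.fromstring(stanza)
print(msg.get("to"))                          # bob@example.com
print(msg.find("{jabber:client}body").text)   # hello

# Binary payloads must be base64-encoded to survive inside the
# text protocol: every 3 bytes become 4 characters, a ~33%
# overhead before the XML framing is even added.
payload = bytes(range(256)) * 12              # 3072 bytes of "binary"
encoded = base64.b64encode(payload)
print(len(payload), len(encoded))             # 3072 4096
```

Compare that with a binary protocol, where the 3072 bytes would simply travel as 3072 bytes plus a small fixed header.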
XMPP is designed badly. It gives the feeling of a protocol without any direction that was coerced into adding multiple subprotocols designed explicitly to work around the limitations of an initial, incomplete specification.
You need to add dozens of new sub-domains to manage an XMPP server.
By default connections are slowed down to a crawl to work around spam issues.
I mean, a chat protocol that does not handle group chat, contacts, discovery, direct client communications, offline messages, TLS integration, or any kind of file transfer? It makes you wonder what it was like at the beginning.
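To make the point concrete: each of those features later came back as a separate extension, and a modern server has to enable them one by one. A rough, illustrative ejabberd-style configuration fragment (module names from ejabberd’s documentation; the hosts are placeholders, and real deployments need more options than shown):

```yaml
hosts:
  - example.com

modules:
  mod_muc:              # group chat (XEP-0045), on its own subdomain
    host: conference.example.com
  mod_mam: {}           # server-side message archive (XEP-0313)
  mod_carboncopy: {}    # sync messages across a user's clients (XEP-0280)
  mod_http_upload:      # file transfer that actually works (XEP-0363)
    host: upload.example.com
```

Note the extra subdomains: this is exactly the “dozens of new sub-domains” problem mentioned above.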
The last big problem with XMPP is that there is no standard reference server and client. The xmpp.org guys do not provide any of that; they only reference other clients. This is a huge mistake, as it makes it a lot easier to lose focus on how the various extensions interact, and on their implementation.
Both Google and Facebook manage email systems (very complex systems). Both Google and Facebook had to give up on supporting XMPP. Somehow, I don’t think the problem was “no client compatibility”, or that they did not have the resources to develop a new all-extensions-supported client. Multi-billion companies tried to support this, and failed. Yet they still support other federated systems, like email, without a fuss. Email is difficult to set up. XMPP is just bad.
XML sucks. XMPP sucks. Badly designed protocols suck.
I once designed and implemented a distributed chat on top of the XMPP protocol. Anyone who has tried implementing it realizes, as I did, how bad this protocol is.
Up until now, extensibility has never been the problem. Protocols were badly designed, or extremely difficult to set up.
Open source APIs and libraries for everyone!
Very good solution, but… How does this have anything to do with the architecture behind it?
The idea is that this actually gives them the ability to break things in the future. Drop APIs. Create new APIs.
Basically, they are the sole members of the committee for the protocol. They are also the only ones with all the metadata, and if things were not encrypted, they’d have the data, too.
We need good protocols. We need a way to quickly add and drop features from a protocol, fine. But protocols like SSL/TLS already do this, and have been (slowly) updated for many years.
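At its core, that TLS-style evolution is just explicit version negotiation: each side advertises what it supports, and the highest common version wins. A minimal sketch (the version lists are hypothetical, and this is the concept only, not the real TLS handshake):

```python
def negotiate(ours, theirs):
    """Pick the highest protocol version both sides support,
    the way TLS picks a version during its handshake."""
    common = set(ours) & set(theirs)
    if not common:
        raise ValueError("no common protocol version")
    return max(common)

# A provider that wants to drop an old, broken version simply stops
# advertising it; up-to-date clients keep working without a flag day.
print(negotiate([2, 3, 4], [1, 2, 3]))  # 3
```

Nothing about this mechanism requires a centralized architecture; it only requires that the protocol reserve room for versioning from day one.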
Giving complete control over the metadata/data to a single entity is not something acceptable in all environments.
What we need is to define entities that will guard where a protocol is headed and how, so that it’s easy to track changes. The XMPP extension management was getting near this, but the protocol itself was a mess, and there was perhaps too little collaboration among the various proposals.
Once you have that, let the slow client apps die. If they were not updating anyway, they would likely have broken with your API changes too.
If you want to be sure others will follow your protocol, just keep providing up-to-date server, client, and API code. If you provide the server and the clients (which you have to do anyway in the centralized model), you will be the one who controls the development of the protocol.
“I want to break APIs” does not sound like a good excuse for dismissing an entire architecture. Just build versioning into your APIs by design.