The lack of public post federation in Diaspora is IMHO a make-or-break issue. The whole network is a little broken, as small pods are cut off from most of the posts on the network due to the way the current federation works.
Here is my proposal for solving this issue; please see the wiki post here.
It is not a comprehensive solution that could just be implemented now; it is a high-level suggestion meant to move the discussion of such a feature forward.
Note: This discussion was imported from Loomio. Click here to view the original discussion.
@loelousertranslato having many relays would mean that if one relay server is down, the posts still federate. After all, these would be user-hosted services, not commercial-grade ones with a 99.99% uptime guarantee.
If a pod cannot contact a relay it would just use another.
Also, I think pods could easily check the origin of a post just by asking the originating pod for its hash and comparing it to the one coming from the relay, to stop relays from generating posts themselves. Will add this to the proposal.
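To make that concrete, here is a rough sketch of what such an origin check could look like (Python, purely for illustration; the endpoint and the SHA-256 digest comparison are assumptions, not something the current protocol provides):

```python
import hashlib

import requests  # assumed HTTP client


def verify_relayed_post(relayed_xml: str, origin_pod: str, post_guid: str) -> bool:
    """Check that a post received from a relay matches what the origin pod serves."""
    # Ask the originating pod for its copy of the post (hypothetical endpoint).
    resp = requests.get(f"https://{origin_pod}/public/posts/{post_guid}", timeout=10)
    resp.raise_for_status()

    # Compare digests of the relayed payload and of the origin's copy.
    relayed_digest = hashlib.sha256(relayed_xml.encode("utf-8")).hexdigest()
    origin_digest = hashlib.sha256(resp.text.encode("utf-8")).hexdigest()
    return relayed_digest == origin_digest
```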
If I understand it well, the central hub server has to be on, but not all the time: if it shut down for a few hours it would just delay new tags and pods being taken into account in the list… but the federation and sending of posts would still be OK because of the relay servers.
Couldn’t the central hub hosting the pod list with tags live in the pods themselves? Each pod would keep track of the latest version of the list and push it to all the connected pods (connected because users would have added a user/seed from them in an aspect) whose list is older, provided the admin has authorized this kind of federation…
This way, all pods participating in this should, with high probability (?), be rapidly interconnected.
The list of pods and followed tags would then be replicated on each pod, so that if some pods are shut down the system would still work fine…
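A minimal sketch of that pod-to-pod spreading, assuming an invented /federation/pod_list endpoint and a simple version counter on the list (Python, for illustration only):

```python
import requests  # assumed HTTP client

# A pod's local view: a version counter plus a mapping of pod host -> followed tags.
# The structure and the endpoint below are made up for this example.
local_list = {
    "version": 42,
    "pods": {
        "pod-a.example": ["linux", "photography"],
        "pod-b.example": ["music"],
    },
}


def push_list_to_peers(peers: list[str]) -> None:
    """Offer our pod/tag list to every connected pod; they keep it only if it is newer."""
    for peer in peers:
        try:
            requests.post(f"https://{peer}/federation/pod_list",
                          json=local_list, timeout=10)
        except requests.RequestException:
            # A peer being down is fine; the list can reach it via another pod later.
            continue


def receive_list(remote_list: dict) -> None:
    """Adopt a received list if its version is newer than ours."""
    if remote_list["version"] > local_list["version"]:
        local_list.update(remote_list)
```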
Well, IMHO we need to have some kind of central hub at some point. We already have a central hub for the project, diasporafoundation.org, and we also have pod lists like podupti.me. Having an official central hub that receives data from the pods themselves would have lots of benefits. Syncing everything everywhere is problematic: say some podmin sets up a new pod - where would that pod send its subscription list to? All the pods? How, and where would it even get one pod address?
The central hub could also be used to finally gather statistics from the network (opt-in, of course). For example, pods could report their number of local users and their post counts, and these statistics could be published on the hub to show where the network is going.
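As a sketch of what that opt-in reporting could look like (the hub URL and field names are made up for the example):

```python
import requests  # assumed HTTP client

HUB_URL = "https://hub.example.org/report"  # hypothetical hub endpoint


def report_statistics(pod_host: str, local_users: int, local_posts: int,
                      opt_in: bool) -> None:
    """Push basic, opt-in statistics from a pod to the central hub."""
    if not opt_in:  # podmins who have not opted in report nothing
        return
    requests.post(HUB_URL, json={
        "pod": pod_host,
        "local_users": local_users,
        "local_posts": local_posts,
    }, timeout=10)
```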
I guess a new pod would have to connect to any of the big pods around to be part of the public post federation. But yes, it has to know where to find other pods first (podupti.me, a web search engine…).
Having the list in the pods, with pod-to-pod spreading of the latest version of the list, is not exclusive of having a central hub, which could be implemented first.
So the order for any pod would be:
1. Use the hub if it is up.
2. If not, use its own list.
3. Get the list from the hub if the hub has a newer list.
4. Send its list to other pods and to the hub if its own list is newer.

Well, I write “newer list”, but I am not sure that a “newer” list is more complete. I am not sure how to track which list is the most accurate and up to date; I am not familiar with syncing and all that…
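A rough sketch of that order, assuming a bare version counter as the notion of “newer” (which, as noted, says nothing about completeness) and invented endpoints for the hub and the pods:

```python
import requests  # assumed HTTP client

HUB_URL = "https://hub.example.org"  # hypothetical central hub


def get_pod_list(own_list: dict, peers: list[str]) -> dict:
    """Follow the order above: prefer the hub, fall back to the local list,
    and exchange whichever copy carries the higher version number."""
    try:
        # 1. Use the hub if it is up.
        hub_list = requests.get(f"{HUB_URL}/pod_list", timeout=10).json()
    except requests.RequestException:
        # 2. Hub unreachable: keep using our own list.
        return own_list

    if hub_list["version"] > own_list["version"]:
        # 3. The hub has a newer list: adopt it.
        return hub_list

    if own_list["version"] > hub_list["version"]:
        # 4. Our list is newer: push it to the hub and to connected pods.
        requests.post(f"{HUB_URL}/pod_list", json=own_list, timeout=10)
        for peer in peers:
            requests.post(f"https://{peer}/federation/pod_list",
                          json=own_list, timeout=10)
    return own_list
```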
The central hub part just feels wrong to me; I need to think more about it.
For the relay part, we can save ourselves the hash-checking blabla by not touching the Diaspora protocol message at all. It is signed by the author, so there’s no way for a relay to spam messages into the network, for the same reason it isn’t possible right now. So we could just send messages like `{"hashtags": ["#a", "#b", "#c"], "diaspora_message": "xml in base64 blob ready to post to /receive/public"}` to the relay server.
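In code, the idea could look roughly like this (the relay URL is invented; the payload shape is just the example above, with the already author-signed XML passed through untouched):

```python
import base64

import requests  # assumed HTTP client


def send_to_relay(signed_xml: str, hashtags: list[str],
                  relay_url: str = "https://relay.example.org/receive") -> None:
    """Wrap an already-signed Diaspora message in the suggested envelope and hand it to a relay."""
    payload = {
        "hashtags": hashtags,
        # The relay only forwards the signed blob, it never alters it,
        # so it cannot forge posts on anyone's behalf.
        "diaspora_message": base64.b64encode(signed_xml.encode("utf-8")).decode("ascii"),
    }
    requests.post(relay_url, json=payload, timeout=10)
```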
@jonnehass yeah I have no real grasp of the D* protocol so I didn’t mention too much about the specifics in the proposal, just the idea of the relay.
To me the central hub makes sense for a reliable source of network data. I mean we don’t decentralize the project page and the wiki either - the central hub isn’t any different from those resources.
What advantages does this method have over the scheme I proposed in this thread (it talks about tag federation about half-way down)?
I’m not a coder at all, so this is just a concept rather than anything more detailed, but I hope it might help to improve D*'s federation, especially for new and one-user pods.
@goob My concern with that approach is that it’s very inefficient (it looks like O(n^2) for you computer science types where n is the number of pods; the amount of network traffic grows rapidly as the number of pods increases), and it implies that all pods are equally trustworthy sources of information about the podsphere.
Also it seems to me that your proposed solution would require a lot of new code to be developed, alongside new messaging semantics. This is based on a very quick analysis so I could be way off base here.
I’m trying for an incremental model that scales well and requires minimal new coding. Also, in my proposal tagged posts are federated in a very efficient way that scales linearly (O(n + m), where n is the number of posts and m is the number of pods).
@Goob no damage at all! I wouldn’t have thought of using the pod to help locate other users or index other posts were it not for your proposal. There’s probably a better idea out there than the one I proposed, too; I just hope I can contribute something.