That’s because those files are generated by diaspora*, placed inside the public directory, and not served by diaspora* itself — they have to be served by the reverse proxy instead, and our supported configs do precisely that. You have to find a way to share those assets between the app server and whatever serves the frontend. This is important anyway, as user uploads are stored in the same directory.
We provide example configs for Apache and Nginx in our installation guides. If you’re familiar with Traefik, I’m sure you can adapt those.
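To illustrate the pattern (a simplified sketch in nginx terms, not our exact shipped config - paths, the server name, and the socket location are placeholders, and TLS details are omitted):

```nginx
# Simplified sketch of the asset-serving pattern our example configs follow.
# Paths, hostnames, and the socket are placeholders; TLS setup is omitted.
upstream diaspora_server {
  server unix:/path/to/diaspora/tmp/diaspora.sock;
}

server {
  listen 443 ssl;
  server_name pod.example.org;
  # ssl_certificate / ssl_certificate_key omitted for brevity
  root /path/to/diaspora/public;

  # Precompiled assets and user uploads live under public/ and are
  # served directly by the webserver, never by the Rails app.
  location /assets/ {
    expires max;
    add_header Cache-Control public;
  }

  # Everything the webserver can't satisfy from public/ goes to the app.
  location / {
    try_files $uri @diaspora;
  }

  location @diaspora {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://diaspora_server;
  }
}
```

Whatever replaces nginx here - Traefik included - needs read access to that same public/ directory, which is exactly the sharing problem mentioned above.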
With all due respect, I don’t think you know what you are talking about. Pretty much all server applications have manual steps to perform during major version upgrades at some point, regardless of how amazing your architecture is. This could be because you removed legacy modules from your code, because you’re doing a large refactor that needs manual adjustments from the people running the service (for example, in config files), or because you’re running some long-running migration that can’t be done in the usual startup loop since it could take a significant amount of time.
All large projects have these steps, and quite frequently, you can’t automate them away, because using crystal balls to make decisions is quite unreliable. Some projects build migration containers, others spend a lot of time building interactive upgrade scripts, but it’s never as simple as just pointing your compose file to `:latest` and running `docker-compose pull && docker-compose up`. That’s not how it’s meant to be, and these are not “architectural issues”. I understand that someone who joined this project apparently two days ago doesn’t quite have the insight to make accurate judgments about what the project and its architecture look like - but I see no reason why anybody should just jump into a project’s discussions and immediately claim there is some kind of bad architecture present. That’s kinda annoying, to be honest.
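To make the “manual steps” point concrete: even our non-Docker major upgrades boil down to a sequence like the following (a rough sketch of the generic Rails flow, not our literal upgrade guide - the release tag is a placeholder, and individual releases add their own steps on top):

```sh
# Rough sketch of a generic Rails-app upgrade; individual releases
# add their own manual steps (config changes, one-off tasks, ...).
cd /path/to/diaspora
git fetch && git checkout vX.Y.Z.W            # placeholder release tag
bundle install
RAILS_ENV=production bin/rake db:migrate      # may run for a very long time
RAILS_ENV=production bin/rake assets:precompile
# ...then restart the application server
```

The `db:migrate` step is exactly the kind of thing that can’t just silently happen in a container entrypoint: on a large pod, it can run for hours, and you want a human watching it.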
The goal of building a decentralized social network is to enable people to have more control over their nodes and to reduce the number of single points of failure in the network. Imagine if a third of all diaspora* nodes were hosted on AWS, and imagine Amazon suddenly having a large-scale outage (happened before, will happen again). A third of the network would suddenly be down. Decentralization means … moving away from centralized infrastructure pieces. AWS is quite the opposite.
If you were a broke college student, you hopefully wouldn’t run things on AWS, because as a broke college student, you wouldn’t enjoy burning money. Instead, you would buy a cheap VPS somewhere, which wouldn’t be as reliable as AWS, but probably an order of magnitude cheaper. 
That’s not a hypothesis, but a fact: the project team runs zero diaspora* nodes for anyone but the official team account, and we have no intention of changing that.
Actually, yes, we do. For any piece of software we ask the podmins to install, we make sure that our guides result in an environment that isn’t worse than before. For database servers, we make sure to point to documentation that correctly explains how to set those up so that they’re not exposed without proper protection. We make sure that the Ruby version is updated. We make sure that our distribution doesn’t contain any security vulnerabilities. We make sure that the configs we provide (including the nginx config, for example) match best practices, by disabling insecure TLS ciphers, for example. Our default production setup only listens on a local Unix domain socket, not a public port, and the only way to reach the service is via a reverse proxy that we provide a well-known configuration for.
If you follow our installation guides, we can be reasonably sure that you don’t open holes in your system by doing what we say. You’re right that we can’t help people with setting up the server itself and things like adequately designed SSH authentication, fail2ban, firewalls if needed, … but we can make sure that the things we ask podmins to do don’t make things worse. Quite frankly, I think we should, because otherwise, only people with a perfect understanding of Ruby on Rails applications could be allowed to run a diaspora* pod.
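To illustrate the socket point: the relevant bit of the config looks something along these lines (a sketch - check diaspora.yml.example for the exact layout; the socket path is a placeholder):

```yaml
configuration:
  server:
    # Listen on a local Unix domain socket instead of a public TCP port;
    # only the reverse proxy on the same machine can reach the app.
    listen: "unix:tmp/diaspora.sock"
```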
There is a high level of trust that podmins have to place in our hands - especially podmins who may only be used to running PHP-based applications, for example. Trust is something you don’t play with if you want to be taken seriously.
Docker, on the other hand, would be a different beast. We’d have to tell people to install Docker, and there is no short “how to make your Docker setup not suck” documentation anywhere. We’re also not going to create those documents ourselves, because that’s more effort (both to create initially and to maintain) than we can justify.
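To give a feeling for the scope: even the bare-minimum “hardening” section of such a document would have to explain and justify options like these, service by service, before it even gets to image provenance, update strategy, or backups (an illustrative sketch, not a recommended config - the image name is made up):

```yaml
# Illustrative only: the kind of knobs a "make your Docker setup not
# suck" guide would have to explain for every single service.
services:
  app:
    image: example/diaspora:some-tag      # hypothetical image name
    read_only: true                       # immutable root filesystem
    cap_drop: [ALL]                       # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true            # block privilege escalation
    tmpfs:
      - /tmp                              # writable scratch space only
    ports:
      - "127.0.0.1:3000:3000"             # never blindly bind to 0.0.0.0
```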
We’ve outlined the issues we have with a Docker-based production setup. So far, we have heard from a small number of people who are actually interested in this (the majority of podmins are perfectly fine with our current setup, by the way), and an even smaller number of people who are very vocal about how we “need” to add Docker containers. Those vocal people raise one very valid point - making the setup very easy for people - but nobody has yet addressed a single point on our list with more than a “just look away, that’s just a small issue”. I find that quite concerning.
That’s a slightly weird claim, given that we already went through the trouble of building a Docker-based development setup. The development setup is, unarguably, significantly more complex to build than a production setup would be: we not only have to set up all required components, but we also have to make sure that developers can edit the code outside the container, and that things like live code reloading work. Our existing Dockerfile for diaspora* and the compose file would work just fine in a production environment, and are technically ready except for a reverse proxy - we’d just have to strip out a couple of things that we put in place to make development setups less painful, like the code sharing between host and container and the 550 lines of cross-platform bash script that makes it easier to run things.
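The “stripping” is mostly about removing dev-only conveniences. In compose terms, it’s roughly this kind of thing (a sketch, not our actual file):

```yaml
services:
  diaspora:
    build: .
    # Dev-only: bind-mount the checkout so edits on the host show up
    # inside the container and live code reloading works. A production
    # setup would drop this and rely on the code baked into the image.
    volumes:
      - .:/diaspora
```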
It would also be significantly easier to just build a production setup based on our current configs to shut everyone up, instead of spending hours on end discussing these things. That would get us some love from people like you and a nice “look how easy it is to set up diaspora*” blog post. Believe it or not, there are actually very good reasons why we don’t take the easy way.
Again, that’s simply not true. We set up RVM to resolve exactly that - we are in control of the Ruby version and the gemset in use, and the whole system is designed not to conflict with whatever else is running on your server. The only thing that could possibly conflict is if you’re running another application that uses Redis - but we added a very explicit note about that to the installation guides, and we provide an easy-to-access way to change the Redis database in our config file. Even our nginx and Apache config examples are designed not to conflict with other things running on the server.
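For example, pointing diaspora* at a different Redis database is a one-line change (a sketch - check diaspora.yml.example for the exact key; the URL is a placeholder):

```yaml
configuration:
  environment:
    # Use Redis database 1 instead of the default 0 so diaspora* doesn't
    # clash with another application on the same Redis server.
    redis: "redis://localhost:6379/1"
```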
In my earlier post, I made a very open invitation to @danielgblanco. They claimed to be interested in looking into these things, and I invited them to let us know if they figured out solutions to the issues we have.
This is an invitation to everyone. If you think you have ideas on how we can provide people with a setup that offers the same maintainability and upgradeability and doesn’t open new potential security issues, I can guarantee you that we’ll be more than happy to work with you on getting this shipped to production. What absolutely doesn’t help, tho, is another “you need to support Docker because Docker is cool” kind of post. We’ve had enough of those.