We want to move our pod to a different server for a number of reasons (get it out of the US, get rid of anything Bitnami-related, and move from CentOS to Ubuntu). I think it should be pretty simple to do, but I wanted to check with someone with more diaspora experience than me.
I was planning on getting a new installation of diaspora up and running on the new server and testing it first to make sure it works. What I’m not sure about are the SSL certs; I’d like to test that they work too. I have another domain I can use for testing and can get new certs for it from Let’s Encrypt, but once I’m ready to move the pod, can I just copy over the existing certs?

Also, I’d like to do a test run of copying over the pod DB and making sure that works, but I’m worried I’ll break something by doing so, especially since I won’t update the DNS until I make the actual move.

Oh, and I was thinking of using rsync to copy over the uploads and keep them updated until the move, unless someone has a better suggestion for that. We have over 25 GB there, and that’ll take a while to copy.
Rsync to copy the data looks like a good idea to me. Do you want to switch from MySQL to PostgreSQL at the same time? I guess the easiest way to migrate the DB will be a dump, so it looks like a good occasion to do so.
You can test the new installation, but keep in mind that the domain is coupled to the database, so don’t run your existing domain on an empty database (don’t create any users) and don’t run your existing database with a new domain (for the domain, the URL in diaspora.yml is the relevant setting).
So my suggestion (it’s what I did every time I moved my pod to a new server) is to copy everything to the new server:
config/database.yml (or create a new one with the new database config)
config/initializers/secret_token.rb (so existing sessions keep working)
the certs from the old server
Set up a new pod with the guide in the wiki, but skip the db:create/db:migrate part and use your existing database instead. Then start everything and test if it runs. You can access the new server by adding an entry with your domain and the new IP address to your local /etc/hosts. You should be able to log in (or already be logged in, because you copied the secret_token.rb) and see all posts.
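The /etc/hosts trick looks like this on your local machine; the IP and domain below are placeholders for your new server and pod domain.

```shell
# Point the pod's domain at the new server's IP, locally only, so you
# can test over HTTPS before touching public DNS.
echo "203.0.113.10 pod.example.com" | sudo tee -a /etc/hosts

# When you're done testing, remove the override again:
sudo sed -i '/pod\.example\.com/d' /etc/hosts
```

Because the override is local, the rest of the federation (and your users) keep hitting the old server while you test.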
Don’t leave your new pod running: if sidekiq is running at 0:00 UTC, it will send birthday notification mails, and your users will receive them twice, from both the old and the new pod. So stop your new pod until you do the final move. Then shut down your old pod, do a final sync of the database and uploads, start the new pod, and change DNS (lower your TTL before the move), and everything should be fine.
OK, I’m moving from Apache to nginx. Where do I put the certs for nginx? Also, I saw the acme.sh script. Since I didn’t use it to create the certs, is it possible to use it to automatically renew them?
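On the first question: nginx doesn’t mandate a location for certs — it only needs readable paths in the `ssl_certificate`/`ssl_certificate_key` directives. One common convention (the paths below are just an example, not a requirement):

```shell
sudo mkdir -p /etc/nginx/ssl
sudo cp fullchain.pem privkey.pem /etc/nginx/ssl/
sudo chmod 600 /etc/nginx/ssl/privkey.pem   # keep the key private

# Then reference them in your server block:
#   ssl_certificate     /etc/nginx/ssl/fullchain.pem;
#   ssl_certificate_key /etc/nginx/ssl/privkey.pem;

sudo nginx -t && sudo systemctl reload nginx   # validate config before reloading
```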
I finished getting everything set up, but when I try to test I just get the generic “Welcome to nginx” screen (without SSL, so something is wrong with the certs too). I set up my nginx.conf as shown here: https://gist.github.com/jhass/1355430 but I’m guessing I missed something.
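The stock welcome page usually means nginx’s default server block is still answering instead of your diaspora one. On Ubuntu/Debian-style layouts the usual fix is below; the site file name `diaspora` is an assumption.

```shell
sudo rm /etc/nginx/sites-enabled/default      # drop the distro's default site
sudo ln -s /etc/nginx/sites-available/diaspora /etc/nginx/sites-enabled/diaspora
sudo nginx -t && sudo systemctl reload nginx  # validate, then reload
```

If you put your config directly in nginx.conf instead of sites-available, the same idea applies: make sure no leftover default `server` block with `listen 80 default_server` shadows yours.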
I moved the other weekend, largely with help from this thread. Here are the things I did (and did wrong), but it all worked out in the end.
I opted to do everything in one go. It seemed easier and more straightforward than trying to do interim testing and syncing multiple times. Total downtime was 1–1.5 hours, most of which was waiting for DNS to propagate.
Straight LAMP (I’m going to be running a few WordPress instances on the same server), and I pulled new Let’s Encrypt/certbot certs. I configured it to run as a service from the get-go.
I used an sshfs mount to do my file transfers. I mounted the new server in a directory on the old server, and in retrospect I don’t know if the other way around would have been better. Server-to-server stuff isn’t a strong point for me beyond backup schemes.
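For anyone following along, that sshfs mount looks roughly like this; the hostname and paths are placeholders.

```shell
mkdir -p ~/newserver
sshfs diaspora@newserver:/home/diaspora ~/newserver   # mount the remote home locally
# ... copy files around as if they were local ...
fusermount -u ~/newserver                             # unmount when finished
```

Mounting in the other direction (old server mounted on the new one) would work the same way; either way, note that tools copying *through* an sshfs mount may not preserve ownership the way a direct rsync-over-ssh does.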
^^ This is where I encountered an oddity that I thought I’d mention. I rsync’d everything over (which didn’t fully work, in that the file attributes weren’t transferred), and when I started the service none of the photo links worked — not even after changing permissions to 755.
What it came down to is something I’d never heard of. As far as I can tell, there was some sort of UID anomaly: the files on the new server said diaspora:diaspora, but apparently weren’t really owned by that user. The fix was me randomly doing a chown -R to a different user:group and back.
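When ownership or permissions don’t survive a transfer like this, re-asserting them explicitly on the new server usually sorts it out. The user/group `diaspora:diaspora` and the uploads path below are assumptions for a typical install.

```shell
# Force ownership to the user the pod actually runs as
sudo chown -R diaspora:diaspora /home/diaspora/diaspora/public/uploads

# Directories need the execute bit to be traversable; plain files don't
sudo find /home/diaspora/diaspora/public/uploads -type d -exec chmod 755 {} +
sudo find /home/diaspora/diaspora/public/uploads -type f -exec chmod 644 {} +
```

The "same name, different owner" symptom can happen when the numeric UID behind the `diaspora` name differs between the two servers — `ls` shows the name, but the kernel checks the number.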