Raspberry Pi support

Hi,

I’ve built an ARM Docker image for diaspora* and written a guide to install it on a plain Raspberry Pi.
The final setup uses Traefik to manage the SSL certificates (from Let’s Encrypt). While the first startup is really long (over 10 minutes), subsequent startups are fast, and diaspora* itself is really usable; it should be able to handle a few dozen users easily.

Feedback more than welcome


While a raspi is able to run diaspora*, I wouldn’t recommend much more than a single user (maybe a friend or two). While in the beginning everything looks fast, you need to remember that the pod is still empty right now; over time it receives a lot of posts from other pods, and if the database runs on the same raspi, this can really become a problem. It will probably still work for a single user later (just slower), but it will not be able to “handle a few dozen users easily” after the database has grown a bit and users have started to follow more people and tags (which also adds complexity to the stream queries).

Of course you can start with a raspi and migrate the database to something bigger once it becomes a problem, but if you want to keep everything on a raspi, you should think about that before you add too many users to it.

Your reaction surprised me a little, so I went and benchmarked database performance on my laptop and on my RPi 3 to compare. Since my diaspora* setup uses Postgres, I used this:

docker run --name postgres -e POSTGRES_PASSWORD=password -e POSTGRES_USER=test -e POSTGRES_DB=test -p 5432:5432 -d postgres:alpine
docker exec -it postgres pgbench -U test -h <hostip> -i --foreign-keys -s 100 test
docker exec -it postgres pgbench -U test -h <hostip> -T 900 -c 50 test

The first line starts a PostgreSQL container, the second initializes a benchmark database whose largest table has 10M rows, and the last line is a benchmark run of 15 minutes (900 s) with 50 clients (a quick check of the resulting table size follows the list below).
I chose these values for these reasons:

  • 50 clients sounds like something along the lines of “a few dozen users”
  • 10M is an estimate of the number of posts 50 users would see over the course of 5 years
  • 15 minutes is a reasonable benchmark duration
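
For reference, pgbench adds 100,000 rows to pgbench_accounts per unit of scale factor, so -s 100 gives exactly 10M rows. A quick sanity check against the container created above (a minimal sketch, reusing the credentials from the first command):

docker exec -it postgres psql -U test -d test -c 'select count(*) from pgbench_accounts;'
# expected output at scale factor 100: 10000000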

Here are the results I got today:

On my laptop (i5-8250U with 32 GB RAM; the database is stored on a mid-range SSD):

transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 50
number of threads: 1
duration: 900 s
number of transactions actually processed: 1052054
latency average = 42.846 ms
tps = 1166.981915 (including connections establishing)
tps = 1166.988581 (excluding connections establishing)

On my RPi 3:

transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 50
number of threads: 1
duration: 900 s
number of transactions actually processed: 156405
latency average = 287.786 ms
tps = 173.740197 (including connections establishing)
tps = 173.742105 (excluding connections establishing)

Analyzing the run on the RPi showed two things:

  • Since I was using Docker bridge networking, I took a ~10% CPU hit from it. Changing the benchmark command line to -h localhost would have avoided this, but since I’m trying to test the limits, I preferred to keep it just as it would be if the database were running on a second Pi.
  • I’m using a very, very crappy SD card, but that was expected. Still, during a run, IO wait represents 50% of the load, so if I ever wanted more performance, I would start by spending some money on a very fast SD card (a sketch for reproducing that measurement follows this list).
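
A minimal sketch of how the IO-wait measurement can be reproduced, assuming the sysstat package is installed on the Pi:

# CPU breakdown every 5 seconds during the pgbench run;
# the %iowait column is the time spent waiting on the SD card
iostat -c 5
# extended per-device stats, to confirm the SD card (mmcblk0 on a Pi) is the bottleneck
iostat -dx 5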

Still, the Pi turned out to be only 6 times slower than my current-gen laptop. Are you telling me that I would only be able to host 6 users on my laptop? If that’s true, what kind of hardware runs the pods with over 20k users?
And besides, those 173 transactions per second would still allow my 50 users to click about 3 times per second each. That still sounds fairly manageable.

I’m willing to go further with my benchmarking, but then, would you please recommend some better assumptions, like:

  • the number of posts in a 5-year-old database for 50 users
  • the number of database transactions a user generates per second (on average)

That would be very useful for better estimating the number of users an RPi can handle.

It’s probably not that easy. The stream query is a bit more complex than just a table with 10M rows. It’s multiple queries with multiple tables joined together, and it gets more complex the more contacts and followed tags people have, or hidden posts (we try to improve the queries, but it’s not that easy). And it gets worse when the database is bigger than the RAM available for caching (the raspi doesn’t have a lot of it, and it needs to be shared with the diaspora* processes, which also need CPU time to build the stream).

I don’t know where exactly the user/post limit for a raspi is, as there are a lot of other factors, and there are also a lot of factors in how many posts and other entries and queries you have after 5 years; it heavily depends on the usage. But I just wouldn’t recommend (long term) hosting more than a few friends on it (also, SD cards aren’t the most reliable storage medium).

Most pods with 20k users have two servers, one for the application and one for the database. On framasphere.org, both have 16 GB of RAM and run on SSDs. Performance is not awesome, but okay, now that we have switched to PostgreSQL 11. Before that, it was really slow (but framasphere has thousands of users).

If you want to experiment and test, please go ahead, there is no problem with that :slight_smile: In fact, you seem to be a power user, so if you’re willing to try hosting a pod on a Raspberry Pi 3 and give us feedback, that could be very interesting. But if real-world performance turns out to be bad, switch to a real server. That’s not really hard to do.

Are you talking about this query?
Then it doesn’t look that complicated to me, and from what I’ve read (in this thread and others) it’s pretty much the bottleneck of the whole code, and it looks like it could be optimized.
The query seems to have changed a little in 0.7.12.0, at least as far as I can tell with my pg_stat_statements-fu. The new query looks a bit better, but a few things still bug me: why use a DISTINCT when the set comes from a single table and one of the selected columns is the PK, or why use a LEFT OUTER JOIN but then force it into a regular join by filtering on the right-hand table’s fields in the WHERE clause…
I’m guessing that once a good Postgres DBA has a look at a large diaspora* installation, the issue will be over.
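
To illustrate that last point, here is a hypothetical query shape (not the actual diaspora* query; table and column names are made up for the example). The WHERE clause filters on a column from the left-joined table, which discards exactly the NULL-extended rows the LEFT OUTER JOIN was preserving, so it behaves like a plain INNER JOIN:

select distinct posts.id, posts.text
  from posts
  left outer join share_visibilities sv
    on sv.shareable_id = posts.id
 where sv.user_id = 42;  -- non-NULL predicate on sv turns this into an inner join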

You haven’t seen how I’ve set up the installation. It’s installed with Swarm enabled, which makes it trivial to move the database container to another node (be it a Pi or otherwise), or to scale the number of nodes diaspora* runs on. Besides, newer Pis have more RAM; 4 GB Pis will ship in a few weeks.

I don’t get why you’re so negative about running diaspora* on an RPi.
Right, it will never be a very large pod, that’s for sure. But that’s not the plan. Everyone has a geek among their relations. I should say “his”/“her” geek :smiley: And that geek has 10-20 “regular joes” to support. That geek could probably host a pod on a Pi to handle the diaspora* usage those “regular joes” need. That’s much closer to the plan.
Framasoft (the people behind framasphere and many other web services) have announced that they plan on reducing their footprint. In that post, they explain that using another central platform (frama*) to get away from the GAFA is not the right option; the right option is self-hosting. My only reason for providing this is to offer a cheap, simple-to-set-up solution for self-hosting diaspora*.
I don’t even plan to use this as-is. My pod will migrate in the coming days to a Kubernetes cluster of arm64 boards (way stronger than a Pi). But since there’s no arm64 Docker image of diaspora*, I built one. And since I had an RPi lying around, I took the two hours to get it back up and running, then two hours to install Docker and build the image (yup, that Docker build broke the 1-hour time limit of my CI/CD :D), and finally a few more hours to write the Ansible roles to provide an easy setup.

At least that’s a very good point :slight_smile:
I plan on writing backup and restore scripts to work around this.
And for my own pod, the database is going to run from an NVMe disk :wink:

I guessed that large installations are spread over a few servers, especially since horizontal scaling of diaspora* seems so trivial.
If you’re a member of the Framasoft gang, then allow me to salute you (bonjour :P) and see my previous reply: you’re pretty much the reason why I made this mini-project :stuck_out_tongue:

As you can see in my previous reply, I won’t host my pod on a Pi for long; besides, I will never have many users: 4, and probably only me as an active one anyway. If the /stream ever becomes a problem for me, I won’t have any issue using Brad Koehn’s solution, but I really doubt it will happen.
And no, I won’t move to a “real server”. A Pi uses a tenth (or even less) of the power of my laptop, yet is only 1/6th of it where performance is concerned. So the perf/watt ratio is still way better for the Pi (not even factoring in the perf/TCO ratio). See, you’re telling me that you’re handling 20k users with 32 GB of RAM; that’s 625 users/GB. I don’t see why my 50 users per RPi seem so off the charts.
I mean, seriously, sure, a Pi is not a data-center node (I’ve seen my fair share of racks fully loaded with servers with 384 GB of RAM), far from it. But it’s not an Arduino either. It’s a more than capable little machine.
Besides, IMHO the /stream issue is a thing of the past, or at least something that’s going to be fixed soon enough for small-scale pods created from now on.

I’m not skilled enough to give you technical feedback about your docker images. Thank you for building them, I’m sure they can be useful. Please try hosting the pod for a few weeks / months, sharing with users from other pods (I’m fla@diaspora-fr.org btw) and then give us feedback here, I’m really curious to know if you find it usable or not (be sure to use pg11 or newer).

Well, if you want to work on that, it would be very cool. Once you have enough real-world data, you can run tests and tell us what you find :slight_smile:

Then you’re responsible for most of the rows in my posts table :smiley: (since I used you as a test that federation does indeed work)

Since the official Docker image has switched to pg12, that’s what I’m using.

It will take years before my pod has enough posts to make a good analysis, and even then, with only me as a user, it wouldn’t really be “real-world”. So I won’t be able to work on that issue. Besides, I’m not a skilled Postgres DBA. I just used to be a DBA, but for another RDBMS.

I don’t know where to document this, so I’ll post it here.
Since you’ve been telling me that the RPi won’t be able to handle the database growth, I wanted to simulate very fast growth.

So, using this DB query, I’ve been able to find people I don’t already “follow”:

select diaspora_handle 
  from people
 where id not in (select person_id from contacts);
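
One caveat on that query: with NOT IN, a single NULL person_id in contacts would make it return no rows at all. If that column can ever be NULL, a NOT EXISTS variant is safer (same result otherwise):

select p.diaspora_handle
  from people p
 where not exists (select 1
                     from contacts c
                    where c.person_id = p.id);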

This way I’ve made my posts count grow. Using these commands on the RPi, I can follow the growth:

# stack and service names of my deployment
Stack=diaspora;Svc=postgres;
# resolve the task ID of the running postgres replica in the swarm
Cont=$(docker stack ps ${Stack} -f name=${Stack}_${Svc}.1 --no-trunc|awk -v N=${Stack}_${Svc}.1 '$2==N{print $1}')
# count the rows of the posts table inside that container
echo 'select count(*) from posts;'|docker exec -i ${Stack}_${Svc}.1.$Cont su - postgres -c 'psql -U diaspora -d diaspora_production -t'

Currently I’m at 1093… far from a huge database :smiley: (17 MB)
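
For anyone following along, the size can be checked the same way, reusing the container lookup from above:

echo "select pg_size_pretty(pg_database_size('diaspora_production'));"|docker exec -i ${Stack}_${Svc}.1.$Cont su - postgres -c 'psql -U diaspora -d diaspora_production -t'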

Is there a way to automatically start sharing with the users I discover?

There is an “autofollow back” feature in the settings: when someone starts sharing with you, you automatically share back. I guess you could start from there and tweak the code to share with everyone your pod discovers.

After 22 days running, the largest issue is still “/receive/public” (note that Friendica uses “/receive/public/”), and not because of the database query behind it, but because of the SSL encryption.
I get a request on this endpoint every 8 seconds on average. Encrypting it generates a load a little over 0.6 by itself, which means a heatsink or even a CPU fan is required.
Is there a way to reduce the sync requests from the other pods?
BTW, I’m currently at 2604 contacts, which is more than I expect 50 users would have together, and over 32k posts. So far Postgres is clearly unimpressed.
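
A quick way to see how much of that load is TLS termination versus diaspora* itself (a sketch; the container names depend on your stack):

# one-shot CPU/memory snapshot per container; the Traefik container's CPU%
# is essentially the cost of encrypting the /receive/public traffic
docker stats --no-stream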

Besides having fewer contacts, the answer is: no.

OK, so, you were right.
I’ve reached a level where the memory and thermal constraints are too pushy for this little RPi 3. I could add some swap space to fix the issue, but that would be painful performance-wise.

Yet I believe a pod on an RPi 4 with 4 GB of memory is clearly doable. That would require:

  • an optimized Traefik build that uses the cryptographic features of the RPi 4’s CPU for the SSL encryption
  • a heatsink, probably with a fan

I currently have over 60k posts in the database and, as expected, so far it hasn’t made a dent in diaspora*’s performance. I have to say kudos to the diaspora* dev team, because the code is cleverly optimized.

I’m still committed to having a look at database performance, but my hardware is going to need an upgrade. My total order came to €81.21 (an RPi 4 with 4 GB of RAM, a power supply, a heatsink, a case, a fan and a micro-HDMI to HDMI cable). That’s pretty much the price of the smallest server at OVH for a single month.
This plan is still a bargain :wink:

I’ll have my optimized Traefik and diaspora* arm64 builds ready before I receive my order.

I’ll keep you posted :wink:

What would be interesting would be combining Docker Swarm with a cluster of Raspberry Pi 4s. You could use nginx as the load balancer. https://medium.com/@simone.dicicco/building-a-raspberry-pi-cluster-with-docker-8d53ee614479

That’s pretty much the strategy here. It uses Traefik as the load balancer instead of nginx, but it’s all the same. The only missing piece is that I don’t set up storage for this. (I tested nginx, and since there was no performance gain, I reverted to Traefik, which I find easier to maintain.)

I’ve been trying to get this to work on my RPi 3 and have even created a pull request from my fork to fix one issue. The Ansible playbook now finishes successfully, but I don’t know what to do next to test it. I’ve tried hitting my RPi 3’s IP address, and my outside DDNS hostname that I’ve port-forwarded, but I keep getting “404 page not found”. I’ve tried /public and /stream, but nothing is ever found.

We don’t publish an Ansible role, so I’m not sure what your setup looks like.

Either way, you need to run ./script/server to start the diaspora* appserver. If you’re running a development pod, it’ll just listen on port 3000 and you can use that. If you’re running in production mode, you need a reverse proxy to serve the public assets and to proxy to the appserver. Here’s an nginx example config to do that.
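
A minimal sketch of what such a config can look like (the server name, certificate paths and socket path are assumptions, not the official example):

upstream diaspora_server {
  # the appserver's unix socket; path is an assumption
  server unix:/home/diaspora/diaspora/tmp/diaspora.sock;
}

server {
  listen 443 ssl;
  server_name pod.example.org;

  ssl_certificate     /etc/letsencrypt/live/pod.example.org/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/pod.example.org/privkey.pem;

  # serve precompiled assets straight from disk
  root /home/diaspora/diaspora/public;
  try_files $uri @diaspora;

  # everything else goes to the appserver
  location @diaspora {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://diaspora_server;
  }
}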

I figured out my problem: my home router didn’t support NAT hairpinning (loopback). It worked as soon as I accessed it from outside my network, via my phone’s cellular connection. I also found an issue in the playbook from that awesome project Sebastien created, and already got a fix merged in. I want to say that having a simple script someone can just run on an RPi, then do some simple port forwarding on their router and set up DNS or dynamic DNS to point to it (most routers have a DDNS setting built right in!), is the way to go. I’m now running my own diaspora* node from a Raspberry Pi 3 B sitting on my desk in my home office. I don’t have to pay a dime to anyone for hosting, or even for a domain name or an SSL cert. Very awesome and cool, and it will make setting up diaspora* pods easier!