Installing diaspora* 0.7.7.0 on SmartOS/Illumos/Solaris

Hello.

Background:

I have finally decided to bring up a new pod on my own infrastructure, which happens to be SmartOS. SmartOS is based on Illumos, which in turn was forked from OpenSolaris, the open-source version of Solaris that formed the basis for Solaris 11 (Oracle closed the doors on the OpenSolaris project after acquiring Sun).

I followed the installation instructions for CentOS 6 (since they seemed fairly generic), and apart from needing to patch sigar.c in the kostya-sigar gem (adding && !defined(__sun)), everything worked fine. The database is set up and the assets are precompiled. This is a production installation using MySQL, and the idea is to have diaspora* running behind a proxy (the proxy uses HTTPS on the outside and plain HTTP on the inside).
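For illustration only (this is not the actual sigar.c code, just the shape of the change): the patch extends a preprocessor guard with && !defined(__sun) so that a branch written with Linux in mind is skipped on Solaris/Illumos and a more portable fallback is compiled instead.

  /* Illustrative sketch; the real guard in sigar.c differs and
   * HAVE_SOME_LINUX_FEATURE is a made-up placeholder macro.
   * The technique is the same: adding && !defined(__sun) pushes
   * Solaris/Illumos onto the portable branch. */
  #include <stdio.h>

  #define HAVE_SOME_LINUX_FEATURE 1  /* placeholder so the sketch compiles */

  int main(void)
  {
  #if defined(HAVE_SOME_LINUX_FEATURE) && !defined(__sun)
      puts("Linux-oriented branch");
  #else
      puts("portable branch, taken on Solaris/Illumos");
  #endif
      return 0;
  }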

Note: I have no prior experience with Ruby, but I have worked in the Unix/Linux environment for many years, so I am very familiar with the platform.

Problem description:

When I try to start diaspora* with “script/server”, it enters what I would call a crash loop and never properly reaches a running state. It is sometimes possible to get the diaspora* start page to load in the browser, but without any assets (images, JS, stylesheets), which never load.

Installed versions:

  • diaspora* source version 0.7.7.0
  • ruby 2.4.5p335 (2018-10-18 revision 65137) [x86_64-solaris2.11]
  • curl 7.62.0 (with AsyncDNS)
  • rvm 1.29.4
  • redis-3.2.9nb1 (configured and running)
  • ImageMagick6-6.9.8.10
  • nodejs-8.1.2

Important part of the log output (this repeats forever):
https://pastebin.com/CgkiQhgy

Configuration:

  • database.yml
  • diaspora.yml

Any help and/or pointers welcome.

Regards,

--lgt

Uh, although this is a very unsupported environment, kudos for the nice report with all the information we could ask for! :cookie:

Well, almost. There is a logfile, log/production.log, which should include more information about what is crashing and why it’s crashing. Mind having a look at that one?

Hello,
Thank you for your answer, and yes I am fully aware that this is not a supported platform, but I thought “how hard can it be?” :slight_smile:

Additional info:
This is running in a non-global zone (somewhat like LXC) with 17G free disk space and 4G RAM.

production.log (this repeats forever)

Maybe I need to increase verbosity to get more data?

Thanks,

--lgt

Anything more helpful in log/eye_processes_stdout.log?

Just these lines (several of them):

bundler: failed to load command: unicorn (/home/diaspora/diaspora/vendor/bundle/ruby/2.4.0/bin/unicorn)
bundler: failed to load command: sidekiq (/home/diaspora/diaspora/vendor/bundle/ruby/2.4.0/bin/sidekiq)

I just realized that the lines from log/eye_processes_stdout.log say “…/ruby/2.4.0/”,
but the installed ruby is 2.4.5. Is that a problem?

OK, the problem is bad_identity: eye thinks the start timestamp of the server process was changed, so it no longer trusts that it is looking at the process it started and kills it (you can see this in the process logs, /home/diaspora/diaspora/log/eye_processes_stdout.log and /home/diaspora/diaspora/log/eye_processes_stderr.log). This is similar to a known problem on Linux with LXC caused by a bug in kostya-sigar, just the other way around: I think because they fixed that bug, it is now broken for LXC. Either way, eye keeps killing the processes because the timestamp doesn’t match.

You can either run diaspora* outside the container, or just not use eye (script/server) to start it. If you start diaspora* manually, it should work:

RAILS_ENV=production bin/bundle exec unicorn -c config/unicorn.rb
and
RAILS_ENV=production bin/bundle exec sidekiq

Aha.

Interesting to see that LXC has/had similar problems.

I started manually per your suggestion, and it works better. Progress!

No crashing processes, but now log/production.log looks like this:
(page loads, but no assets)
production.log

Basically, the call stack dump repeats for every asset request.
A quick check of the file system shows that (for example) “jquery_ujs-3689…9e.js”
exists in the /home/diaspora/diaspora/public/assets directory along with all the other precompiled files.

Thanks,

--lgt

In production, diaspora* does not serve assets itself. Your reverse proxy (nginx or Apache) is supposed to do that.
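As a rough sketch only (the paths come from your log output, the backend address is an assumption, and unicorn may well be listening on a unix socket instead, so adjust it to whatever your diaspora.yml/unicorn config says), the relevant Apache fragment could look something like this:

  # Requires mod_proxy, mod_proxy_http and mod_headers.
  # Serve the precompiled assets straight from diaspora*'s public/ directory.
  DocumentRoot /home/diaspora/diaspora/public
  <Directory /home/diaspora/diaspora/public>
    Require all granted
  </Directory>

  # Don't proxy the static directories; hand everything else to unicorn.
  ProxyPass /assets !
  ProxyPass /uploads !
  ProxyPass / http://127.0.0.1:3000/
  ProxyPassReverse / http://127.0.0.1:3000/

  # Tell diaspora* the original request was HTTPS.
  RequestHeader set X-Forwarded-Proto "https"

If Apache runs on a different host or zone than diaspora*, the public/ directory also has to be available there (shared, or copied over after each asset precompile).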

Ah.

I hadn’t realized that.

My proxy lives in another zone, so I will have to experiment with the apache2 configuration to make everything work.

That will have to wait for tomorrow.

Thank you for your time!

--lgt