How to configure database access for a Docker image?

I’m trying to use this docker image to set up a pod on my server, which is running Ubuntu 20.04.

I don’t completely understand the requirements, even after reading the installation instructions on the wiki.

First of all, does diaspora* require PostgreSQL, Redis, or both?

Secondly, it seems that the database(s) are meant to be configured on the host, not inside the Docker container, which makes sense because we want that data to persist beyond the life of the container. However, I don’t see how the process inside the container can access a database on the host.

The only port that seems to be exposed in or out of the container is the application port, 3000. So if you use redis://redis or localhost to reach Redis, I can’t see how that wouldn’t get stuck trying to talk to something inside the container. And I don’t see anything at all that tells the process where to find PostgreSQL, only a username and password. Those won’t be much help if you don’t know where to connect.

I see several volumes related to pgsql data being mounted, but I don’t understand what that data is. I thought that the database was written to over a socket, not direct file access. And even if that’s true, it seems like I would want my database to be in a more general location so I could use it for other services too.

Finally, I’m new to hosting databases in general. The wiki’s link for setting one up is dead. Does anyone have an updated guide? Maybe that gap in knowledge is causing all of the confusion above.

diaspora* doesn’t provide support for installations other than those using the official installation guides.

However, let’s invite @koehn, who created and maintains the Docker image you’re using; he will be able to help you.

The Docker container I configured requires both Redis (required for all diaspora installations) and Postgres.

You can run both Redis and Postgres inside Docker containers and easily connect them with Docker Compose. Data can be stored in Docker volumes or on the filesystem.

Docker Compose sets up a network that solves this for you; you’ll want to familiarize yourself with it. You can find some instructions on how to do this on my GitLab. Fair warning: the Postgres version in that file is pinned at 13.1; I haven’t updated that compose file in some time…
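To give you the general shape, here’s a rough sketch; it is not my actual compose file, and the image tags, password, and volume names are placeholders, so treat the file on my GitLab as the reference:

version: "3"

services:
  diaspora:
    image: koehn/diaspora:latest
    ports:
      - "3000:3000"            # expose the app to the host so a reverse proxy can reach it
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:13.1       # placeholder tag; check the real file for the current pin
    environment:
      POSTGRES_PASSWORD: changeme    # placeholder password
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:latest
    volumes:
      - redis-data:/data

volumes:
  postgres-data:
  redis-data:

On the network Compose creates, each container can reach the others by service name, which is why diaspora can talk to redis://redis and to a Postgres host simply called postgres.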

Thank you so much for your time!

I will go ahead and try to follow those instructions and report back on how it goes. I read them initially, but I wanted to make sure I understood things before trying to execute them.

Dockerizing postgres and redis is probably a good idea, thanks for the suggestion!

I’m going to use the official images of both services.

Ah, wait, I understand. It seems that the image will automatically create containers for postgres and redis. I did not see that initially. I will try it!

Yes, the compose file will give you diaspora, redis, and postgres, based upon the official images. If you need a web server, you can add nginx or apache as needed, again from one of the official images. I didn’t include them because many people (myself included) already have a webserver for other sites.

Alright, I’ve made some progress, but I think I’m going to need to call it a night.

I’m very new to nginx and docker-compose, I’m sorry for my ignorance.

I’ve read about how Compose manages networking, so I think I understand what’s going on. The containers are up and running, but no ports are exposed to the host. The only thing that can see the diaspora web server is another container in the Compose environment.

I can’t figure out how to solve this problem. Right now I run nginx on the host directly; I didn’t see a lot of benefit to dockerizing it. And you’re right, I do have other sites running through it.

But even if that weren’t true, I wouldn’t want to start nginx as part of this Compose environment, because I may need it for other sites in the future.

In the example on the wiki, diaspora is exposed via a Unix socket, but I don’t see anything like that here, so it seems the intent is to use a network socket.

I have found a few articles that seem to address the problem (here and here), but I’m still not seeing the solution. In the first one, the port still isn’t available over the docker0 interface, and the second one doesn’t actually offer a solution; it just says “a solution exists”. And in any case, I don’t think I want what the second one describes: I don’t want to expose everything, just port 3000. It seems wise to minimize the chances of services colliding.

Can you explain in more detail how the host (or nginx) is supposed to be able to see the port?

Thank you again so much for your time and patience.

Congratulations on sticking with it! Learning a new environment like Compose can be confusing at first.

In my compose file, lines 12-13 are a ports declaration. This is exactly equivalent to running docker run -p 3000:3000…, in that it forwards traffic arriving on port 3000 of the host’s network to port 3000 of the diaspora container. Without that port forwarding there’s no way for the host to route packets to the container, although packets coming from the container are routed outbound over the host’s network. For Redis and Postgres that’s fine, because there’s no need for the host to reach them directly; but for nginx to forward requests to diaspora, the host needs a way to route those requests, which is what the ports section enables.
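For reference, that part of the file is just this (the surrounding lines may have drifted since):

diaspora:
  ports:
    - "3000:3000"    # host port 3000 -> container port 3000

Redis and Postgres deliberately have no ports section, so they remain reachable only from other containers on the Compose network.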

Once the containers are started (diaspora takes a minute or two to load; you can watch the logs to see its progress), you should be able to curl localhost:3000 and get a response. If you don’t, check the logs and adjust diaspora.yml and database.yml accordingly (or ask here).
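The containers log to stdout/stderr, so assuming the service is named diaspora in your compose file, you can follow its log from the directory containing docker-compose.yml with:

docker-compose logs -f diaspora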

Once that’s done and diaspora is running, you should work on configuring your nginx to work its magic.

Got it, okay, that makes good sense.

For testing purposes, I cleared my Firefox cache and shut down nginx and all the other services I run through it.

I renamed the example to diaspora.yml, edited the URL and pod name, enabled registrations, and configured my mail settings (I use MailGun, if it matters; probably not). I set a password in database.yml and docker-compose.yml.

When I run docker-compose up -d, things seem to start without errors.

$ docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS                PORTS                                       NAMES
5fa4bd6422a8   koehn/diaspora:latest            "/bin/sh -c ./startu…"   6 minutes ago   Up 6 minutes          0.0.0.0:3000->3000/tcp                      compose_diaspora_1
06253bd2d942   postgres:10-alpine               "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes          5432/tcp                                    compose_postgres_1
87121de3d3a0   redis:latest                     "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes          6379/tcp                                    compose_redis_1

I can also use netstat to see that the port is open.

$ netstat -l | grep 3000
tcp        0      0 0.0.0.0:3000            0.0.0.0:*               LISTEN

However, when I curl localhost:3000, I get “connection reset by peer” the first time; every subsequent time I get no output at all.

When I connect with Firefox, I just get “unable to connect”.

I went in search of logs, but couldn’t find any. I checked the compose directory, its parent, and /var/log. I don’t see any of the mounted folders either, which I found suspicious: I expected to see docker-images, postgres, postgres-run, and redis, but none of them are there.

I think tracking down the log file seems like the best path forward, but I’m happy to try anything you suggest.

@koehn, I made some progress!

I discovered that I need to configure my proxy to set some request headers. Now my nginx config looks like this.

server {
    listen 443 ssl;
    server_name diaspora.domain.org;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header    Host                $host;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;
        proxy_set_header    Accept-Encoding     "";
        proxy_set_header    Proxy               "";
    }
} 

Now I can access the page at https://diaspora.mcfallsfamily.org. However, only the main HTML document loads; the stylesheet and all other resources 404. I can tell it’s reading my config, because the pod name I set appears at the top. But the page content makes it seem like things are not set up.

It looks like this.


Welcome, friend.

You’re about to change the Internet. Let’s get you set up, shall we?

Configure your pod

Open config/database.yml and config/diaspora.yml in your favourite text editor and carefully review them, they are extensively commented.


I’m not sure what to do to resolve the issue. Looking through the config, I think this might be what’s missing.

assets: ## Section

  ## Serve static assets via the appserver (default=false).
  ## This is highly discouraged for production use. Let your reverse
  ## proxy/webserver do it by serving the files under public/ directly.
  #serve: false

  ## Upload your assets to S3 (default=false).
  #upload: false

  ## Specify an asset host. Ensure it does not have a trailing slash (/).
  #host: http://cdn.example.org/diaspora

I haven’t touched anything in this section; it’s all defaults. It seems to be telling me to configure nginx to serve the static content like stylesheets and scripts, but I’m not sure what I would need to do to make that happen.

Maybe I’m barking up the wrong tree.

Can you give me any advice?

Thank you!

@koehn I don’t want to bother you, so this is the last time I’ll ping you on this.

I did get it working by changing serve to true in diaspora.yml. I don’t expect to host more than a few dozen users, so hopefully this will work out okay.
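For anyone who finds this later, the change was just uncommenting that flag in the assets section of diaspora.yml and flipping it:

assets: ## Section
  ## Serve static assets via the appserver (default=false).
  serve: true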

But I suspect there should still be a way to get the assets served correctly. I modified the compose file to mount the public folder as a volume, but I still couldn’t make any progress. Inside the Docker container I could see all the files (the assets, the 404 page, etc.), but in the mounted directory on the host I couldn’t see anything.

Hopefully, if I hear back or this becomes an issue, I can back up my database and set everything right.

I finally got this solved! I had to mount the entire public directory to my host, and then point nginx to that as the site’s root. Now it works!
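In case it helps anyone else, my nginx config now looks roughly like this; the root path is a placeholder for wherever you mount the container’s public directory on your host:

server {
    listen 443 ssl;
    server_name diaspora.domain.org;

    # host directory where the container's public/ is mounted
    root /srv/diaspora/public;

    # serve static files directly; hand everything else to diaspora
    location / {
        try_files $uri @diaspora;
    }

    location @diaspora {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header    Host                $host;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;
    }
}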

FWIW, I now include a copy of lighttpd in the Docker image that will serve the static content itself on port 8080.
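If you’d rather not mount public/ onto the host, you could point nginx at that instead. A rough sketch, assuming you also publish port 8080 in the compose file and that the compiled assets live under /assets/:

location /assets/ {
    proxy_pass http://127.0.0.1:8080;    # lighttpd in the image serving the static files
}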

Hey @koehn, any chance of bringing your site back up? Or bringing the info over to the official GitHub?