Only one host for the production environment. What to use: docker-compose or a single-node Swarm?

Question

We have recently moved all our corporate services to ONE DigitalOcean server, running all of them in a Docker environment: Redmine, DokuWiki, OpenDS, Mattermost, a Docker registry, Portainer, …

The way we did it was to create all the needed docker-compose files (one per service, each containing all the containers that service needs: RoR + PostgreSQL, Node + Mongo + Redis, …), add the mount points for the volumes (almost all containers must be persistent), and include the “restart: always” option in all of them.

All these apps were started with “docker-compose up -d”, and in this way this single server is able to run all services (and all of them start on server boot). We don’t need a cluster right now.
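
For reference, each compose file follows roughly this pattern (the service names, images, paths, and credentials below are placeholders, not our real configuration):

```yaml
version: "3"
services:
  app:
    image: redmine:latest                      # illustrative app image
    restart: always                            # restart after crashes and on boot
    ports:
      - "3000:3000"
    depends_on:
      - db
    volumes:
      - ./data/files:/usr/src/redmine/files    # persistent app data
  db:
    image: postgres:9.6
    restart: always
    environment:
      POSTGRES_PASSWORD: example               # placeholder credential
    volumes:
      - ./data/db:/var/lib/postgresql/data     # persistent database files
```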

We don’t know whether this approach is a good one, or whether it shouldn’t be used in production (and, if so, why). We want a single server so we pay as little as possible, provided it can handle all our apps. Should we create a swarm, move all containers to swarm services, and have only one manager and no workers? Would that approach be a better option?

If so, what should we use to replace jwilder/nginx-proxy (and docker-letsencrypt-nginx-proxy-companion) to manage HTTP redirections and automatic generation of Let’s Encrypt certificates?

Thanks in advance!

Response

I always recommend a single-node Swarm, with the assumption that you know the risks of a single node of anything and that you’re backing up persistent data, keys/secrets, etc.

My top reasons for a single-node Swarm over docker-compose:

  • It only takes a single command to create a Swarm from that Docker host: docker swarm init.

  • It saves you from needing to manually install/update docker-compose on that server. The Docker engine is installable and updatable via common Linux package managers (apt, yum) via https://store.docker.com, but docker-compose is not.

  • When you’re ready to become highly available, you won’t need to start from scratch. Just add two more nodes on a well-connected network with the first node and make sure the firewall ports are open between them. Then run docker swarm join-token manager on the first node and execute its output on the second and third (the commands are sketched after this list). Now you have a fully redundant raft log and three managers. Then change your compose file to run multiple replicas of each service, re-apply it with docker stack deploy, and you’re playin’ with the big dogs!

  • You get a lot of extra features out of the box with Swarm, including secrets (an example follows this list), configs, auto-recovery of services, rollbacks, healthchecks, and the ability to use Docker Cloud’s Swarm BYOS (“bring your own swarm”) to easily connect to the swarm without SSH.

  • Healthchecks, healthchecks, healthchecks. docker run and docker-compose won’t re-create containers that fail a built-in healthcheck. You only get that with Swarm, and it should always be used in production for all containers (see the healthcheck sketch after this list).

  • Rolling updates. Swarm’s docker service update command (which docker stack deploy also uses when applying yaml changes) has TONS of options for controlling how containers are replaced during an update. If you’re running your own code on a Swarm, updates will be frequent, so you want the process to be smooth: depend on healthchecks for “ready” state, maybe start the new container before stopping the old one, and roll back if there’s a problem. None of that happens without Swarm’s orchestration and scheduling (a sample command follows this list).

  • Local docker-compose for development fits nicely into a workflow that gets those yaml files onto production Swarm servers.

  • Docker and Swarm are the same daemon, so no need to worry about version compatibility of production tools. Swarm isn’t going to suddenly make your single production server more complex to manage and maintain.
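
For the scale-out steps above, the commands look roughly like this (the stack and service names are hypothetical):

```bash
# On the first node: print the join command that new managers must run
docker swarm join-token manager
# Run the printed "docker swarm join --token <token> <ip>:2377" on nodes 2 and 3

# Bump the replica count (or set "replicas: 3" in the yaml) and re-apply the stack
docker service scale mystack_web=3
docker stack deploy -c docker-compose.yml mystack
```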
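
A quick sketch of the secrets feature (the secret value and names are made up):

```bash
# Store a secret in the swarm's encrypted raft log
printf 'S3cretPassw0rd' | docker secret create db_password -

# Grant a service access; it shows up in-container at /run/secrets/db_password
docker service create --name db --secret db_password postgres:9.6
```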
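
A minimal healthcheck sketch in compose-file form, assuming the image ships curl and exposes a /health endpoint (both are assumptions for illustration):

```yaml
version: "3.4"
services:
  web:
    image: myapp:1.0                # hypothetical image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s                 # probe every 30 seconds
      timeout: 5s                   # fail the probe if it takes longer than this
      retries: 3                    # mark unhealthy after 3 consecutive failures
      start_period: 15s             # startup grace period (compose file 3.4+)
```

Under Swarm, a task that goes unhealthy is killed and rescheduled; plain docker-compose only reports the status.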
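
And a sample docker service update run exercising a few of those options (the service name and image tag are hypothetical):

```bash
# Replace one container at a time, wait 10s between replacements,
# start each new task before stopping the old one, and roll back on failure
docker service update \
  --image myapp:1.1 \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-order start-first \
  --update-failure-action rollback \
  mystack_web
```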

There’s more, but those are my big-ticket heavy hitters!