Version: 0.0.2

Docker

Containerization technology that runs processes in sandboxed containers without interfering with the host operating system. I currently use it to download and run prebuilt OS images. I first used Docker solely for work, mostly for building images to automate and set up one side of the stack. Prebuilt images, combined with Docker Compose, made it easy to build really simple and reproducible configurations.

Compose

Images

Originally I used images without setting tags (which defaults to latest), but found that it's actually beneficial to pin them to a static version.

warning

The authelia image was promoted to a newer release during a reboot and caused services to run without authentication, because the new release used a different config format.

Volumes

Mounting volumes is pretty easy, with both a shorthand and a long syntax. Both single-file and folder mounts were straightforward.
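As a sketch, the two syntaxes side by side (image and paths here are illustrative, not from my actual setup):

```yaml
services:
  app:
    image: nginx:1.27
    volumes:
      # Shorthand: HOST_PATH:CONTAINER_PATH[:ro]
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
      # Long syntax: more verbose, but explicit about every field
      - type: bind
        source: ./data
        target: /usr/share/nginx/html
        read_only: true
```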

Ports

Writing port mappings in Docker Compose files can get confusing when using the shorthand syntax. I prefer the long syntax to clearly indicate which container port is mapped to which host port. In this form:

target: CONTAINER_PORT
published: HOST_PORT
protocol: tcp/udp
mode: host
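A sketch comparing the two forms (ports here are illustrative):

```yaml
services:
  web:
    image: nginx:1.27
    ports:
      # Shorthand: "HOST:CONTAINER" -- easy to misread which side is which
      - "8080:80"
      # Long syntax: unambiguous
      - target: 80        # container port
        published: 8081   # host port
        protocol: tcp
        mode: host
```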

Networks

When containers are deployed without a network defined, they are attached to a network that Docker auto-creates for the compose file. For services to communicate with services in other files, they all need to exist within the same network. To solve this I usually define a 'global' network called docker-network and attach any service that needs to communicate with services outside its compose file. A service can be attached to multiple networks.
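A minimal sketch of that pattern (docker-network is the name from my setup; yours may differ):

```yaml
services:
  app:
    image: nginx:1.27
    networks:
      - default         # private to this compose file
      - docker-network  # shared across compose files

networks:
  docker-network:
    external: true  # created once with: docker network create docker-network
```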

Another cool feature with networks is that you can run a specific service within the network of another by using network_mode: host or network_mode: service:CONTAINER_NAME. The former runs the container on the host network; the latter runs it in the same network namespace as another service. That comes in handy when running VPN containers, as you can then run a service under the VPN's network.
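A sketch of the VPN pattern (gluetun is one common VPN image; the service names and tags are illustrative):

```yaml
services:
  vpn:
    image: qmcgaw/gluetun:v3
    cap_add:
      - NET_ADMIN  # required to create the tunnel

  torrent:
    image: linuxserver/qbittorrent:latest
    network_mode: service:vpn  # all traffic routes through the vpn container
```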

One annoying thing that has happened: with many different services, it's easy to exhaust the address pools Docker allocates for new networks, since each auto-created network claims a fairly large default subnet. To manage this I have set up custom networks for each service file.
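A sketch of a per-file custom network with an explicitly chosen small subnet (the subnet value is illustrative; shrinking the daemon-level default-address-pools setting in /etc/docker/daemon.json is another way to address this):

```yaml
services:
  app:
    image: nginx:1.27
    networks:
      - backend

networks:
  backend:
    ipam:
      config:
        - subnet: 172.28.5.0/24  # small, explicit subnet instead of a large default
```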

Include

This was something I never had a reason to learn until I reverted my infrastructure from Kubernetes back to Docker, but it helped a lot when I had to create the same services in almost all of my compose files. Using include, you can keep a central configuration that gets merged into a docker-compose.yaml. This helped when creating init-service and backup containers for all of my different compose files. Using environment variables, I was able to modularize the file so it could create unique containers, even though they all ran almost exactly the same.
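A sketch of that layout (file and variable names are illustrative):

```yaml
# docker-compose.yaml
include:
  - path: ../common/backup.yaml  # shared backup/init services
    env_file: .env               # per-project variables that make them unique

services:
  app:
    image: nginx:1.27
```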

Running as Non-Root

GPU Passthrough

With GPU passthrough, many containers such as the hotio and linuxserver images make it fairly easy to pass through media devices. All you need to do is set privileged: true, pass through the device with devices: /dev/dri:/dev/dri, and run the image as root; the hotio/linuxserver init scripts will handle group configuration and downgrading the user to the appropriate UID and GID based on the environment variables passed in. As I grew to better understand how these images are managed, I realized none of that is needed. You can use the original images for these services and simply follow these steps:

  • Add the non-root user to the render and video groups

  • Get the group ids for the render and video groups with this command:

    cat /etc/group | grep GROUP
  • Add this to the docker-compose file:

    group_add:
    - ${RENDER_GID}
    - ${VIDEO_GID}
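The group lookup above can be scripted into the .env file that compose reads; a small sketch (the gid_of helper is my own, not a standard tool):

```shell
#!/bin/sh
# Print the numeric GID of a group by name (third field of getent output)
gid_of() {
    getent group "$1" | cut -d: -f3
}

# Write the values the compose file expects into .env
printf 'RENDER_GID=%s\nVIDEO_GID=%s\n' "$(gid_of render)" "$(gid_of video)" > .env
```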

With that, everything is set for full transcoding capabilities while the service runs as a non-root user.

note

Since Intel Arc support on Debian is still not the most up to date, ensure you can run sudo intel_gpu_top (from the intel-gpu-tools package). If that fails, make sure you have firmware-linux-nonfree, firmware-intel-graphics, and mesa-va-drivers installed!

Volumes

This was another nice thing that the linuxserver and hotio init scripts would manage, which I have essentially recreated with a basic init-container setup that creates the necessary folders before the service runs and then takes ownership of them using the non-root UID and GID values. This has led me to slowly replace linuxserver and hotio images where possible.
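A sketch of that init-container pattern (service names, image, and paths are illustrative):

```yaml
services:
  init:
    image: busybox:1.36
    # Create the config folder and hand it to the non-root user
    command: sh -c "mkdir -p /config && chown -R ${PUID}:${PGID} /config"
    volumes:
      - ./config:/config

  app:
    image: nginx:1.27
    user: "${PUID}:${PGID}"
    volumes:
      - ./config:/config
    depends_on:
      init:
        condition: service_completed_successfully  # wait for the init to finish
```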

Swarm

Multi-node Docker configuration that emphasizes high availability. I dived into this when I purchased a second computer to add to the media server setup. It was fairly easy to pick up since it uses the same Docker Compose YAML configuration, with just a couple of new fields to determine node placement, restart policy, etc.

note

Due to network performance issues and the possibility of Swarm becoming deprecated in the near future, I no longer use it.

Node Selectors

Node selectors were the only new thing compared to Compose that I had to learn. You create selectors to pin a service to a specific node, or just target manager or worker nodes to control how the service is distributed.
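A sketch of placement constraints in a stack file (the hostname and label values are hypothetical):

```yaml
services:
  app:
    image: nginx:1.27
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker            # only schedule on worker nodes
          # - node.hostname == media-01    # or pin to a specific node
          # - node.labels.gpu == true      # or match a custom node label
```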