I can’t seem to wrap my head around (Docker) containers and especially their maintenance.
As I understand it, containers contain a stripped-down OS that shares some resources with the host?
Or is it more like a closed-off part of the file system?
Anyway, when I have several containers running on a host system, do I need to keep them all updated separately? If so, how?
Or is it enough to update the host system and not worry about the containers?
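From what I’ve gathered so far, the usual flow with Docker Compose is something like this, though I’m not sure I have it right:

```sh
# Pull newer versions of every image referenced in docker-compose.yml,
# then recreate only the containers whose image (or config) changed:
docker compose pull
docker compose up -d
```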
It’s built on the shipping container parallel: in order to transport goods, you hide everything not required for shipping inside a standardized container.
Love the container analogy - immediately made so much sense to me! Also clarifies some misunderstandings I had.
I was mucking about with Docker for a Plex server over the weekend and couldn’t figure out what exactly Docker was doing. All I knew was that it’d make Plex ‘sandboxed’, but I then realised it also had access to stuff outside the container.
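In hindsight I’m pretty sure that outside access came from the volume mounts I’d configured rather than the sandbox leaking. Something like this (paths are made up) deliberately maps a host directory into the container:

```sh
# -v maps a host path into the container's filesystem, so the
# otherwise-isolated process can see exactly that one directory:
docker run -d --name plex \
  -v /mnt/media:/media \
  lscr.io/linuxserver/plex:latest
```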
Their logo says it all: a whale carrying a stack of shipping containers.
The whole containers-on-a-ship idea is their entire premise. The ship (Docker) presents a unified application/OS layer to the host, so containers work plug-and-play on top of the Docker base layer.
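That’s why the same image runs unchanged on any host with the Docker engine installed, regardless of the underlying distro. A trivial way to see it, using Docker’s own test image:

```sh
# Pulls and runs the hello-world test image, removing the container
# on exit; it behaves identically on any host running Docker:
docker run --rm hello-world
```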
On a very specific note, I don’t run my Plex server in a container. I have a Docker Compose setup with 20+ apps, but Plex is on the bare-metal OS because it’s kinda finicky and doesn’t play well with NAS storage. You also need to set up the Plex claim via their API, as the container name changes. This is my stock Plex config if it helps:
```yaml
plex:
  image: lscr.io/linuxserver/plex:latest
  container_name: plex
  network_mode: host
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Etc/GMT
    - VERSION=docker
    - PLEX_CLAIM= #optional
  volumes:
    - /home/null/docker/plex/:/config
    - /x:/x
    - /y:/y
    - /z:/z
  restart: unless-stopped
```
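If you do run it in a container, the claim flow (as far as I know) is just: grab a token from https://plex.tv/claim, paste it into PLEX_CLAIM above, then bring the service up:

```sh
# Recreate only the plex service so it picks up the new claim token:
docker compose up -d plex
```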