Hi everyone!

I’m making a Docker version of my sharing server for ease of use and it works, but I would like to know if there are some “best practices” when it comes to shared folders.

The ‘problem’ is that the Docker image is run as root in its container, while the user runs as their local user, and both need read/write access to the same file.

So my setup is to have the local user create the folder where the file will live, share it via a “volumes” entry in docker-compose.yml, and set user: “1000:1000” there as well (with instructions on how to find the uid & gid).
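For concreteness, here’s a minimal sketch of that setup (the image and folder names are placeholders, not from a real project):

```yaml
services:
  share:
    image: my-share-server        # placeholder image name
    user: "1000:1000"             # host uid:gid; find yours with `id -u` and `id -g`
    volumes:
      - ./shared:/data            # folder created by the local user beforehand
```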

This has to be done by the user before running the Docker image, though. Is there a simpler way?

I have seen groups, running Docker in userspace, and more, but it all seems so cumbersome. I just want a folder where both entities have read & write access.

  • Botzo@lemmy.world · 5 days ago

    I’ll probably come off as a crusader, but rootless Podman is a great way to accomplish this out of the box.

    Podman maps your user ID to root in the container, but you don’t need root (or a rootful socket) to run the container.

    Docker also has a rootless mode now, but I’ve found no reason to go back.

    • Valmond@lemmy.world (OP) · 4 days ago

And here I was trying to simplify things for people 😅. Will check it out though, thanks!

      • Botzo@lemmy.world · 4 days ago

        Once podman is installed (iirc the network package is marked as a dependency by most package managers) and your user is configured (with subuids/subgids), I really think podman is a simpler model. The containers you run are actually yours (not root’s), and you don’t need to be part of a privileged docker group to run them. Of course, you can run containers as root with podman too: just use sudo.
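        As a sketch, the subuid/subgid setup usually amounts to something like this (the 100000-165535 range is just a conventional default, not a requirement):

        ```shell
        # Check whether your user already has subordinate ID ranges allocated
        # (many distros set these up automatically when the user is created):
        grep "$USER" /etc/subuid /etc/subgid

        # If not, allocate a range of subordinate uids and gids:
        sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 "$USER"

        # Have podman pick up the new mapping:
        podman system migrate
        ```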

        You’ll actually need to configure your user the same way for running docker in rootless mode, which should be the default.

        Your Dockerfile will work with podman. Your docker-compose file will too (via podman compose). You’ll have access to awesome new capabilities like pods, defining your containers with Kubernetes-style YAML, and running your containers via systemd.
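        For example (assuming podman-compose or a compose provider is installed; the container and unit names here are placeholders):

        ```shell
        # Your existing compose file works as-is:
        podman compose up -d

        # Or run a container directly and wire it into systemd as a user service
        # (podman 4.x syntax; newer versions prefer Quadlet files instead):
        podman run -d --name share -v ./shared:/data my-share-server
        podman generate systemd --new --name share > ~/.config/systemd/user/share.service
        systemctl --user daemon-reload
        systemctl --user enable --now share.service
        ```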

        However, with rootless podman/docker, you should remove any/all of the USER silliness the rootful/default docker people do to protect themselves a bit from rogue processes effectively running as root and/or container escapes to root.

        • BrianTheeBiscuiteer@lemmy.world · 4 days ago

          Very little, honestly. Since it prefers rootless execution it can be trickier to deal with at times, but you can usually figure out the tricks to get around that; rinse and repeat.

    • Valmond@lemmy.world (OP) · 4 days ago

      Thank you, very interesting link!

      So I’m on the right track it seems, but not there entirely. I always thought that the docker image was run with the root in the image, if that makes sense. Like, I make an image based on Ubuntu, and when I run that image on my Mint system, there is an Ubuntu:root user doing its work inside the container, but it has nothing to do with my Mint:root user. It seems that’s not the case, and the user space is kind of merged with the host machine’s? Is this just Docker magic, and is what I assume correct?

      Another question: if I make, as the article suggests, a “docker” user when I create my docker image, what happens if that user id already exists on the machine running the docker image?

      Cheers and thanks!

      • Onno (VK6FLAB)@lemmy.radio · 4 days ago

        There’s a common but persistent misconception that Docker is like running a virtual machine. This is understandable but incorrect.

        A better way to think of it is as a security wrapper around an untrusted process.

        If you look at your running processes whilst a container is running, you’ll see the processes inside the container running on your “host” machine - remember, it’s not a host - guest situation.

        There is no relationship between the user inside the container and the user on the host, unless you start mapping the UID and GID.

        The only exception to this is the root user, which shares its UID/GID with the actual root user.

        See: https://www.howtogeek.com/devops/why-processes-in-docker-containers-shouldnt-run-as-root/

        Edit: I suspect, but don’t know for sure, that the root user inside the container is actually the same user as the one running the Docker process, which is typically the root user on the “host”.
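        One way to see the mapping for yourself, assuming rootless podman is set up, is to print the container’s uid map:

        ```shell
        # /proc/self/uid_map shows how uids inside the container map to host uids.
        podman run --rm alpine cat /proc/self/uid_map
        # A rootless run typically prints something like:
        #        0       1000          1     <- container root is YOUR host uid
        #        1     100000      65536     <- other uids come from your subuid range
        # With rootful docker, the first line would map container uid 0 to host uid 0.
        ```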

          • Valmond@lemmy.world (OP) · 2 days ago

            More information 😅 thanks a bunch, this is marvelous!

            Edit: from a purely user perspective (not the real nitty-gritty security perspective), would it be correct to treat a running image like it’s running on a completely different computer, with some magic glue to call it, start it, etc.? I will dig down, but it takes time.

            Edit2: so root is the same in the docker container and on my system, but a docker-defined user isn’t? How does that work, especially collisions of UIDs?

            Edit3: Connect is saving off more and more posts as I edit lol.

  • Rikudou_SageA · 5 days ago

    You can switch to the user as part of the Dockerfile, somewhere close to the end of it. Assuming uid 1000 is a pretty sane default; people with different setups will have to define it themselves.
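    A sketch of what that looks like near the end of a Dockerfile (the “app” user name is arbitrary):

    ```dockerfile
    # Create an unprivileged user with a predictable uid/gid...
    RUN groupadd -g 1000 app && useradd -u 1000 -g 1000 -m app
    # ...and switch to it so everything after this line runs unprivileged.
    USER 1000:1000
    ```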