Fresh install: Docker cgroup mountpoint does not exist

λ docker version
Client:
 Version:           19.03.14-ce
 API version:       1.40
 Go version:        go1.15.5
 Git commit:        5eb3275d40
 Built:             Tue Dec  1 23:20:14 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.14-ce
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.15.5
  Git commit:       5eb3275d40
  Built:            Tue Dec  1 23:14:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b.m
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Running a docker build -t brutalbirdie/some-app:0.6.2 . returns

 ---> Running in 6f9f3d9617a9
cgroups: cgroup mountpoint does not exist: unknown

How To Fix This?

PS: This is just a documentation topic; I will post my 'fix' right after.

I ran into a similar issue and found a temporary fix somewhere else.
Temp fix:
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
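To confirm the temporary mount actually took effect, a quick check (using util-linux's findmnt, which ships with Garuda/Arch) would be:

findmnt /sys/fs/cgroup/systemd

It should list the mount point with FSTYPE cgroup; no output means the mount did not happen.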

A better version, which is not temporary:

LXC (or other uses of the cgroups facility) requires the cgroups filesystem to be mounted (see §2.1 in the cgroups kernel documentation). It seems that as of Debian wheezy, this doesn't happen automatically.

Add the following line to /etc/fstab:

cgroup /sys/fs/cgroup cgroup defaults

For a one-time thing, mount it manually:

mount -t cgroup cgroup /sys/fs/cgroup

Abridged for this Docker problem:

Create this folder:

sudo mkdir /sys/fs/cgroup/systemd

Mount it:

sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Add this to your /etc/fstab:

cgroup    /sys/fs/cgroup/systemd    cgroup    defaults

Reboot
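If you would rather test the new fstab entry before rebooting, a rough sketch (plain mount(8) behaviour, nothing Garuda-specific):

sudo mkdir -p /sys/fs/cgroup/systemd   # the mount point has to exist first
sudo mount -a                          # mounts everything in /etc/fstab that is not mounted yet
findmnt /sys/fs/cgroup/systemd         # verify the new entry took effect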


PS: There is still the issue that the created folder /sys/fs/cgroup/systemd gets deleted after a reboot.
Creating a simple on-start task with systemd that checks for the folder and mounts it could be a solution, but it feels wrong.
I will investigate whether there is a better solution; one idea is sketched below.
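In case someone wants to try the systemd route: instead of a script, a native mount unit should do, since systemd creates a missing mount point directory itself before mounting. This is only a rough, untested sketch (the unit file name must be the systemd-escaped mount path, and it only applies while the system runs the cgroup v1/hybrid layout):

# /etc/systemd/system/sys-fs-cgroup-systemd.mount
[Unit]
Description=Named systemd cgroup v1 hierarchy (Docker compatibility)
Before=docker.service

[Mount]
What=cgroup
Where=/sys/fs/cgroup/systemd
Type=cgroup
Options=none,name=systemd

[Install]
WantedBy=local-fs.target

Enable it with sudo systemctl daemon-reload && sudo systemctl enable --now sys-fs-cgroup-systemd.mount; with the unit in place the /etc/fstab line is no longer needed.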

5 Likes

Thanks for the solution :slight_smile:
We might want to make this the default? @librewish

5 Likes

We use cgroup v2, and Docker does not support it yet.
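For anyone who wants to check what their own machine is running, the filesystem type mounted at /sys/fs/cgroup gives it away (GNU coreutils stat):

stat -fc %T /sys/fs/cgroup   # cgroup2fs = pure cgroup v2, tmpfs = v1/hybrid layout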

6 Likes

You can disable it in Garuda Boot Options.
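For reference, outside of the Garuda tool the same switch is a kernel command-line parameter; a rough sketch assuming a GRUB setup:

# Append systemd.unified_cgroup_hierarchy=0 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# then regenerate the config and reboot:
sudo grub-mkconfig -o /boot/grub/grub.cfg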

7 Likes

I was facing this too, so it might be useful.
What are the downsides of disabling cgroups v2? Wasn't that needed for memavaild?

4 Likes

Yes, it's needed by memavaild;
that's why we enable it by default.

6 Likes

I started to use Garuda KDE Dr460nized this Tuesday thanks to DistroTube, and I am looking forward to documenting my issues after the migration from Manjaro.

It's also awesome to see such responsiveness from the community and devs.

And by the way, thanks for this gorgeous distro, which is based on my all-time favorite, Arch Linux.

7 Likes

I think podman supports cgroup v2.

6 Likes

You can install

sudo pacman -S podman-docker

and use it with the usual docker commands.
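The podman-docker package just ships a docker wrapper that forwards everything to podman, so a quick smoke test could look like this (only an illustration; rootless setups may need extra configuration):

docker --version                            # should report podman, not the Docker engine
docker run --rm docker.io/library/hello-world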

5 Likes

Yes but No :smiley:

Since part of my job is development with Docker, I do not want to make another switch.
I remember, 2-3 years ago, when I started forcing my coworkers to run projects in Docker.
And yes, it was literally force.

I was sick of 'But with my PHP/Apache/NGINX (place software X here) version it worked!'
Not to mention CI/CD and automated testing with Docker, yada yada.

I know that I could replace my local Docker with podman, but I imagine some of my coworkers will come to me with some Docker problem, and if I tell them 'Ohh, I use podman now' they will strangle me right then and there.

But for another user who has no need to stick with specific software, this could be a good solution.

6 Likes

If you want to document things anyway, what about writing these directly to the wiki and linking them here? :slight_smile: The wiki needs some love :grin:

I do like Docker as well. It's not only that; the option to have reproducible setups and all config files in one directory is the selling point for me :smiley:

5 Likes

I am a little confused by this.
Is this 'wiki' the Garuda Linux FAQ?

And also, what is the workflow you propose here?
Write my solutions/problems as a comment on the Garuda Linux FAQ?

2 Likes

Ahh, OK.
Yes, I could do that. But I guess I have no permission to do that :slight_smile:

2 Likes

Nice! :slight_smile: I PM'd you some information concerning that.
Thanks for participating!

4 Likes

All right, thanks for that.
I will add that to the wiki.

6 Likes

@BrutalBirdie

I am having the same issue; this solution didn't work for me.

I'm a Linux noob, so I figure I am doing something wrong?

cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system>             <mount point>  <type>  <options>  <dump>  <pass>
UUID=70A3-BF56                            /boot/efi      vfat    umask=0077 0 2
UUID=c1fc10ac-4c60-4f06-a718-ce838e6d1146 /              btrfs   subvol=/@,defaults,noatime,space_cache,autodefrag,compress=zstd 0 1
UUID=c1fc10ac-4c60-4f06-a718-ce838e6d1146 /home          btrfs   subvol=/@home,defaults,noatime,space_cache,autodefrag,compress=zstd 0 2
UUID=c1fc10ac-4c60-4f06-a718-ce838e6d1146 /root          btrfs   subvol=/@root,defaults,noatime,space_cache,autodefrag,compress=zstd 0 2
UUID=c1fc10ac-4c60-4f06-a718-ce838e6d1146 /srv           btrfs   subvol=/@srv,defaults,noatime,space_cache,autodefrag,compress=zstd 0 2
UUID=c1fc10ac-4c60-4f06-a718-ce838e6d1146 /var/cache     btrfs   subvol=/@cache,defaults,noatime,space_cache,autodefrag,compress=zstd 0 2
UUID=c1fc10ac-4c60-4f06-a718-ce838e6d1146 /var/log       btrfs   subvol=/@log,defaults,noatime,space_cache,autodefrag,compress=zstd 0 2
UUID=c1fc10ac-4c60-4f06-a718-ce838e6d1146 /var/tmp       btrfs   subvol=/@tmp,defaults,noatime,space_cache,autodefrag,compress=zstd 0 2
UUID=c1ed9087-3378-4246-94ca-524fbf79a5f1 swap           swap    defaults,noatime 0 0
cgroup    /sys/fs/cgroup/systemd    cgroup    defaults

Did you reboot your system after that, and did you run this?

sudo mkdir /sys/fs/cgroup/systemd

PS: I updated the fix again, so the last part now contains the mkdir step.
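To narrow it down, the output of these two checks would help; the first shows whether the directory currently exists, the second whether anything is actually mounted on it:

ls -ld /sys/fs/cgroup/systemd
findmnt /sys/fs/cgroup/systemd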

4 Likes

Yep, but for some reason, when I reboot, the /sys/fs/cgroup/systemd directory no longer exists.

Not sure why.

1 Like