Temporary system hangs/freezes when updating

Could you also provide mount | grep btrfs ?


Here you go.

╭─[email protected] in ~
╰─λ mount | grep btrfs
/dev/nvme0n1p1 on / type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=256,subvol=/@)
/dev/nvme0n1p1 on /home type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=257,subvol=/@home)
/dev/nvme0n1p1 on /root type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=258,subvol=/@root)
/dev/nvme0n1p1 on /srv type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=259,subvol=/@srv)
/dev/nvme0n1p1 on /var/cache type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=260,subvol=/@cache)
/dev/nvme0n1p1 on /var/log type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=261,subvol=/@log)
/dev/nvme0n1p1 on /var/tmp type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=262,subvol=/@tmp)

mount | grep btrfs
/dev/sda2 on / type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=256,subvol=/@)
/dev/sda2 on /srv type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=259,subvol=/@srv)
/dev/sda2 on /root type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=258,subvol=/@root)
/dev/sda2 on /home type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=257,subvol=/@home)
/dev/sda2 on /var/cache type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=260,subvol=/@cache)
/dev/sda2 on /var/log type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=261,subvol=/@log)
/dev/sda2 on /var/tmp type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=262,subvol=/@tmp)
/dev/sda2 on /run/timeshift/backup type btrfs (rw,relatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=5,subvol=/)

Your timeshift is empty.

YES, because I tested whether disabling quota and qgroups would help.
And yes, I am aware of that: since I deleted them and only took one 'snapshot' with qgroups and quota disabled, Timeshift can't know the size. (At least that is how I understood btrfs, qgroups, and quota.)
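For anyone following along, this is roughly what disabling quotas looks like; the mount point `/` is an assumption, and both commands need root on a live btrfs filesystem:

```shell
# Check whether qgroup accounting is currently on;
# this errors out if quotas are already disabled.
sudo btrfs qgroup show /

# Turn quota accounting off entirely for the filesystem.
sudo btrfs quota disable /
```

After this, tools like Timeshift lose the cheap per-subvolume size accounting that qgroups provided.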

I forgot to mention I also tried disabling the Baloo file indexer, with no results either.
I also tried disabling the compositor.


There is no difference here from my mount options, and I don't see the same thing (running linux-zen under a more vanilla Arch).

I can't get btrfs_cleaner to trigger after running either a defrag or a balance, so something else is going on.

Could you have a read of @torvic9's post here and see whether there's anything obvious in terms of filesystem usage?

╭─[email protected] in ~ took 9s
[⚡] × sudo btrfs filesystem usage /
[sudo] password for eha:
Device size:                 914.30GiB
Device allocated:            150.03GiB
Device unallocated:          764.27GiB
Device missing:                  0.00B
Used:                        147.81GiB
Free (estimated):            766.13GiB      (min: 766.13GiB)
Free (statfs, df):           766.13GiB
Data ratio:                       1.00
Metadata ratio:                   1.00
Global reserve:              288.83MiB      (used: 0.00B)
Multiple profiles:                  no

Data,single: Size:148.00GiB, Used:146.14GiB (98.74%)
/dev/nvme0n1p1        148.00GiB

Metadata,single: Size:2.00GiB, Used:1.67GiB (83.47%)
/dev/nvme0n1p1          2.00GiB

System,single: Size:32.00MiB, Used:16.00KiB (0.05%)
/dev/nvme0n1p1         32.00MiB

/dev/nvme0n1p1        764.27GiB

No big delta for me.
I run btrfs balance start -musage=50 -dusage=50 / now and then since I thought this might help with my freezes.


I was experiencing that for a day or two a few weeks back, and then it went away. No idea which change resolved it because I'm constantly tweaking my system. I could have fixed it myself, or it may simply have resolved on an update.

Some of the things I played with: switching to the Zen kernel, disabling the Garuda performance-tweaks, disabling qgroups for btrfs, deleting all my Timeshift snapshots, and probably lots of other stuff that I don't quite remember. I think I might have written some udev rules and services as well. I'm always changing stuff, and if I don't keep track of it all my memory gets fuzzy; sorry I can't be more specific.



for i in /sys/block/*/queue/scheduler; do echo "$i: $(cat $i)"; done

(there's probably a better golf score for that, but hey)
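
A better golf score is available: when grep is given more than one file, it prefixes each matching line with the filename, so the loop collapses to a one-liner:

```shell
# grep prints "filename:contents" for every matching line
# when more than one file is passed; "." matches any non-empty line.
grep . /sys/block/*/queue/scheduler
```

The output format is the same as the loop's, e.g. `/sys/block/sda/queue/scheduler: mq-deadline kyber [bfq] none` (minus the space after the colon).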

╭─[email protected] in ~ took 5s
╰─λ bash -c 'for i in /sys/block/*/queue/scheduler; do echo "$i: $(cat $i)"; done'
/sys/block/nvme0n1/queue/scheduler: [none] mq-deadline kyber bfq
/sys/block/nvme1n1/queue/scheduler: [none] mq-deadline kyber bfq
/sys/block/sda/queue/scheduler: mq-deadline kyber [bfq] none
/sys/block/sdb/queue/scheduler: mq-deadline kyber [bfq] none
/sys/block/zram0/queue/scheduler: none
/sys/block/zram10/queue/scheduler: none
/sys/block/zram11/queue/scheduler: none
/sys/block/zram12/queue/scheduler: none
/sys/block/zram13/queue/scheduler: none
/sys/block/zram14/queue/scheduler: none
/sys/block/zram15/queue/scheduler: none
/sys/block/zram1/queue/scheduler: none
/sys/block/zram2/queue/scheduler: none
/sys/block/zram3/queue/scheduler: none
/sys/block/zram4/queue/scheduler: none
/sys/block/zram5/queue/scheduler: none
/sys/block/zram6/queue/scheduler: none
/sys/block/zram7/queue/scheduler: none
/sys/block/zram8/queue/scheduler: none
/sys/block/zram9/queue/scheduler: none

Oh yeah, I was also playing with disabling USB autosuspend. Too many changes to remember; as I say, I'm constantly tweaking stuff.


BFQ can be temperamental sometimes, but an NVMe drive should work with none.
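
If you want to pin schedulers persistently, a udev rule is the usual route. A sketch (the file name and scheduler choices are assumptions, adjust to taste):

```
# /etc/udev/rules.d/60-ioschedulers.rules
# NVMe drives: no scheduler (the hardware queues are fast enough)
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
# Non-rotational SATA/SAS drives (SSDs): mq-deadline instead of bfq
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
```

Rules take effect on the next boot, or immediately with `sudo udevadm control --reload && sudo udevadm trigger`.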

OK, so maybe it's time to try @tbg's approach and start disabling things and cleaning out... :confused:

:sob: But I already tried most of what @tbg described in his post and other topics.

Btw, how does one disable the Garuda performance-tweaks?

I kinda want to try reinstalling Garuda and see if the same stuff happens again.
But this would be my last resort.


Settings / Tweaks, then uncheck them.


Uhm... am I blind? I don't see any Tweaks...


Garuda Assistant might be what I am looking for.

They are disabled...

Yes, got it, but they are disabled already.
So I'll try enabling them, I guess :smiley:


IDK, my system works fine:

I masked all the services individually with the systemctl mask command.
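
In case it helps anyone, masking works per unit; the unit name below is just a placeholder:

```shell
# Mask (and stop immediately, with --now) a unit so nothing can
# start it, not even as a dependency of another unit.
sudo systemctl mask --now example-tweak.service

# Undo it later if needed.
sudo systemctl unmask example-tweak.service
```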


I think there might be some bigger issue with my system.
While chatting here I had my iotop -aoP -d 0.5 open, and check this out:

339 be/4 root        116.83 M   1053.38 M  0.00 %  2.39 % [btrfs-cleaner]
340 be/4 root          4.95 M    906.86 M  0.00 %  0.03 % [btrfs-transacti]

Maybe someone could watch iotop -aoP -d 0.5 for a while as well and see whether they get the same btrfs-cleaner and btrfs-transacti behavior.
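
If anyone would rather log it than watch it, iotop's batch mode works for that (the iteration count is arbitrary; at -d 0.5 each iteration is half a second, and the whole thing needs root):

```shell
# -b: non-interactive batch output, -n 120: ~1 minute at -d 0.5;
# filter down to the two kernel threads in question.
sudo iotop -aoP -d 0.5 -b -n 120 | grep -E 'btrfs-(cleaner|transacti)'
```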

That was after 22m34s of iotop monitoring, for a better reference.

sudo dmesg | grep -i btrfs