Hello! I want to shrink my btrfs Garuda partition using GParted on a live CD, but the "btrfs check" command GParted runs before applying any operations fails with the following message:
Opening filesystem to check...
Checking filesystem on /dev/nvme0n1p5
UUID: 1e6dced2-7257-4b09-bc7a-ff320fb0b1ed
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups
Counts for qgroup id: 0/332 are different
our: referenced 8713555968 referenced compressed 8713555968
disk: referenced 8713555968 referenced compressed 8713555968
our: exclusive 1176907776 exclusive compressed 1176907776
disk: exclusive 1148502016 exclusive compressed 1148502016
diff: exclusive 28405760 exclusive compressed 28405760
found 16871546880 bytes used, error(s) found
total csum bytes: 15483652
total tree bytes: 658964480
total fs tree bytes: 606420992
total extent tree bytes: 31948800
btree space waste bytes: 110815280
file data blocks allocated: 19913306112
referenced 27457921024
Everything seems fine while using Garuda, though, and the installation is only a month old. Could the zstd compression have anything to do with it? Has anyone experienced a similar issue? Should I proceed with executing "btrfs check --repair"? Thanks in advance!
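For what it's worth, the reported "diff" line is just the difference between the in-memory ("our") and on-disk exclusive byte counts from the check output above. A quick sanity check in plain shell, with the numbers copied from that output:

```shell
# the qgroup "diff" is our_exclusive - disk_exclusive
# (values taken from the btrfs check output above)
our=1176907776
disk=1148502016
echo $(( our - disk ))   # prints 28405760, matching the "diff" line
```

So the mismatch is only about 27 MiB of exclusive-usage accounting, not corrupt file data.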
PS: according to this conversation, it seems like the error is not important (why return 1 then?).
Well, I quote "found 16871546880 bytes used, error(s) found". Also, from the btrfs-check man page: "btrfs check returns a zero exit status if it succeeds. Non zero is returned in case of failure." I can't seem to find any more info about btrfs-check, or a way to look at the source.
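That man page behaviour is easy to demonstrate: any non-zero exit status counts as failure, which is presumably what makes GParted refuse to continue. A minimal illustration of the branching, using `false` as a stand-in for a failing `btrfs check` (the real command needs root and an unmounted btrfs device):

```shell
# branch on the exit status of a check-style command; `false` exits
# with status 1, standing in for `btrfs check` reporting errors
if false; then
  echo "filesystem clean"
else
  echo "check failed with status $?"
fi
```

GParted has no way of knowing that this particular non-zero status only reflects a qgroup accounting drift, so it treats it like any other failure.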
Good catch! That's probably worth mentioning to the btrfs devs. Also, I cloned the partition (because I was scared of running --repair on my daily driver), and after the "--repair", 0 was returned and no errors were reported, with no apparent loss of data (again, it was tested on a cloned partition).
Edit: After a bit of chatting on the btrfs IRC channel, the output seems correct; the error is reported by the line "diff: exclusive 28405760 exclusive compressed 28405760". "0/332" does not mean "0 out of 332"; rather, it is an id for a quota group that corresponds to a subvolume. Fun fact: they called me brave for having quotas enabled, which is Garuda's default, I think (since I haven't changed anything). So perhaps you should reconsider having them enabled by default.
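To make that concrete: a qgroup id is written as level/id, and for level-0 qgroups the id matches a subvolume id, so the affected subvolume can be looked up with `btrfs subvolume list`. A small sketch; the lookup command in the comment is only illustrative, since it needs root and a mounted btrfs filesystem:

```shell
# split a qgroup id of the form "level/id"; for level 0 the id
# is the id of the subvolume that the qgroup tracks
qgroupid="0/332"
level=${qgroupid%%/*}
subvolid=${qgroupid##*/}
echo "level=$level subvolid=$subvolid"
# to find the matching subvolume (root + mounted btrfs fs needed):
#   sudo btrfs subvolume list / | grep -w "ID $subvolid"
```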
There is an open issue in Timeshift's GitHub repo regarding this:
But so far I haven't hit any such issue, like snapshot deletion taking hours; maybe because I don't have too many snapshots.
I also noticed that Timeshift, by default, deletes some old autosnap snapshots when taking a new autosnap, which keeps the snapshot count down. So with this setup we are not prone to this bug and don't need to worry.
I don't know.
It makes sense in general.
Maybe someone knows better
I do remember several posts where people either complained about quotas or solved their problems by disabling them.
But I think quotas make more sense considering Timeshift...
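For anyone who does decide to act on this: before disabling quotas outright (or reaching for --repair), a plain quota rescan often resolves stale qgroup counters in place. Sketch only; the mount point "/" is an example, and both commands need root, so they are held in variables here rather than executed:

```shell
# illustrative only: the two relevant btrfs quota subcommands
rescan_cmd="btrfs quota rescan /"     # recompute stale qgroup counters
disable_cmd="btrfs quota disable /"   # turn quotas off entirely
echo "safer first step: sudo $rescan_cmd"
echo "last resort:      sudo $disable_cmd"
```

Disabling quotas does cost you Timeshift's per-snapshot space accounting, so the rescan is worth trying first.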