I noticed I had less than 100GB of free space so I thought maybe a btrfs balance might help, plus I hadn’t done one yet since I reinstalled.
So I started the balance process through btrfs-assistant, and after a while my system crashed to the login screen (I have auto login enabled, so this was kinda weird).
Now I supposedly have only 1MB of free space on the disk and apps are crashing left and right.
Any ideas? I'm thinking I should just switch to another FS, but I like snapper…
If you have Btrfs snapshots that have captured major changes to the disk (adding or removing a lot of packages for example), those snapshots can potentially tie up a good amount of disk space. Try deleting some old snapshots, then run the balance operation again.
I didn’t realize that (I thought snapshots came at very little disk cost). I deleted all my snapshots, and that cleared up most of my drive. I created a manual snapshot so I can still go back if needed.
There is a systemd unit called snapper-cleanup.service which is set to run every day by default. When it runs, it will delete any snapshots that exceed the thresholds defined in the Snapper configs. These thresholds can be set to anything you want, and thresholds can be defined for timeline snapshots as well (to retain a certain number of hourly, daily, weekly, and so on).
So, once per day when this service runs, if you have more than ten snapshots in this subvolume, it will delete the oldest ones until there are only ten left.
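As an illustration (the exact path and defaults vary by distro, so treat this as a sketch rather than your actual config), the thresholds live in the Snapper config for each subvolume, e.g. /etc/snapper/configs/root. A config that keeps at most ten "number" snapshots plus a small timeline might look like:

```ini
# /etc/snapper/configs/root (excerpt; values are examples, not your settings)
NUMBER_CLEANUP="yes"        # clean up installation/admin snapshots by count
NUMBER_LIMIT="10"           # keep at most ten "number" snapshots
TIMELINE_CLEANUP="yes"      # also prune timeline snapshots
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
```

snapper-cleanup.service is what actually enforces these limits when it fires.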
The thing is, the number of snapshots you are retaining does not define how much space they occupy. You can have a thousand snapshots that take up no disk space, or you can have one snapshot that fills up the whole disk. What determines how much space a snapshot takes up is how much the snapshot deviates from the current state of the subvolume.
When you first take a snapshot of a subvolume, that snapshot does not take up any disk space. It is not a copy of the disk or anything, it is more a way of telling Btrfs “Hey, remember this exact state the subvolume is in right now.” Moving forward from that snapshot, Btrfs will keep track of any changes that happen. At the moment you first take a snapshot, nothing has changed on the disk yet so there is no extra stuff to “remember”.
If you then delete 100GB of stuff off of the disk, that snapshot will “remember” that 100GB. Even though you deleted that stuff, the disk space won’t actually be released until the snapshot is deleted.
All that to say: the number of snapshots you are retaining does not determine how much disk space the snapshots are taking up. Even if you only have ten snapshots, if they represent major changes to the disk they can potentially occupy a lot of space.
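A toy model may make this clearer. The Python sketch below is purely illustrative (real Btrfs extent accounting is far more involved): extents are reference-counted, a snapshot is just one more reference, so it is free when taken but pins every extent it still references.

```python
# Toy model of copy-on-write snapshots. Illustrative only: real Btrfs
# tracks extents in B-trees, not Python sets.

class Subvolume:
    def __init__(self, extents=None):
        self.extents = set(extents or ())  # extent ids this subvolume references

class Filesystem:
    def __init__(self):
        self.extent_size = {}  # extent id -> size in GiB
        self.volumes = []

    def used(self):
        # An extent occupies space as long as ANY subvolume references it.
        live = set().union(*(v.extents for v in self.volumes)) if self.volumes else set()
        return sum(self.extent_size[e] for e in live)

    def snapshot(self, vol):
        snap = Subvolume(vol.extents)  # shares all extents; costs ~nothing
        self.volumes.append(snap)
        return snap

fs = Filesystem()
root = Subvolume()
fs.volumes.append(root)

fs.extent_size["games"] = 100      # write 100 GiB of data as one extent
root.extents.add("games")
assert fs.used() == 100

snap = fs.snapshot(root)           # taking the snapshot changes nothing
assert fs.used() == 100

root.extents.discard("games")      # "delete" the data from the live subvolume
assert fs.used() == 100            # ...but the snapshot still pins it

fs.volumes.remove(snap)            # only deleting the snapshot frees the space
assert fs.used() == 0
```

One snapshot plus one big deletion pins 100 GiB here, while a thousand snapshots of an unchanging subvolume would pin nothing.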
One thing I am confused about is why a balance can cause you to LOSE disk space. From my understanding, a balance can help reclaim lost disk space when it is fragmented (which can happen after deleting a lot of data, e.g. snapshots that held a lot of different data).
But it shouldn’t cause you to lose space, correct?
As soon as I ran the balance operation I noticed I started losing tens of GB in disk space and it keeps going down.
Originally I had 160 GB of free space out of 1 TB, and right now, a few minutes into the balance, I am at 45.44 GB.
You lost space because the operation did not finish.
During a balance operation, it is normal to see an increase in disk usage temporarily. This happens because when Btrfs balances data, it first reads the existing blocks and then writes new copies of them to different locations on the storage device. The original blocks are not immediately deleted, which can cause an increase in disk usage until the balance operation is complete and the old blocks are removed.
Additionally, during a balance operation, Btrfs may also allocate extra metadata blocks for bookkeeping purposes. These blocks will be reclaimed once the balance operation is finished.
If the balance operation had completed successfully, it most likely would have freed up space like you were expecting.
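To make the "temporary increase" concrete, here is a toy Python model with made-up numbers (not real Btrfs behavior): each block group briefly exists in two copies while it is relocated, so a nearly full device can run out of space mid-balance even though net usage would be unchanged afterwards.

```python
# Toy sketch of why a balance needs headroom: each block group is first
# rewritten to a new location and only then released, so usage briefly
# holds both copies. Sizes in GiB; numbers are invented for illustration.

def balance(block_groups_gib, device_gib):
    used = sum(block_groups_gib)
    peak = used
    for bg in block_groups_gib:
        in_flight = used + bg  # old copy still on disk while the new one is written
        if in_flight > device_gib:
            raise RuntimeError("out of space mid-balance: old blocks not yet freed")
        peak = max(peak, in_flight)
        # new copy committed, old copy released: net usage is unchanged
    return peak

# A half-full device sails through, peaking above its steady-state usage:
peak = balance([200, 300], device_gib=1000)
assert peak == 800  # 500 GiB used, but 800 GiB needed at the worst moment

# A nearly full device fails even though it is not 100% used,
# which mirrors what happened in this thread:
try:
    balance([200, 300, 400], device_gib=1000)  # 900 GiB used, needs 1100 in flight
except RuntimeError as exc:
    failure = str(exc)
assert "out of space" in failure
```

So yes, you effectively need enough free (or at least unallocated) space to hold a second copy of whatever is being relocated at any given moment.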
But it failed because I ran out of space. It happened again: the moment my disk ran out of space, everything crashed. I just left the balance running and kept an eye on it and my disk space.
Do I need a certain amount of free space before being able to do the balance?
Most of my space usage is in Steam games in my home directory.
I’d try moving some big files out of the way on some other storage (another disk or partition).
Since you are operating in critical conditions, do it from the command line, and rather than moving (mv), first copy (cp) the files elsewhere, then try deleting (rm) them. If deleting does not work because the disk is full (silly as it sounds, I think that can happen), see if you can truncate them instead with : >/path/to/huge/file (works in bash and fish alike).
Of course put them back in place when done with the balance.
So I managed to free up about 400 GB by deleting my Steam library, and that allowed me to finish the balance operation. However, now I only have 80 GB free and can’t reinstall my Steam games.
sudo btrfs subvolume list /
ID 257 gen 1478390 top level 5 path @home
ID 258 gen 1478390 top level 5 path @root
ID 259 gen 1478316 top level 5 path @srv
ID 260 gen 1478390 top level 5 path @cache
ID 261 gen 1478390 top level 5 path @log
ID 262 gen 1478347 top level 5 path @tmp
ID 263 gen 1478390 top level 508 path .snapshots
ID 508 gen 1478390 top level 5 path @
╭─zany130@Garuda in ~ via v3.11.6 as 🧙 took 16ms
╰─λ sudo btrfs balance status /
No balance found on '/'
Yup, no idea what’s going on.
btrfs filesystem usage
shows (I deleted a ~50 GB cache file from pCloud which I didn’t need; I guess that’s why it went up):
I actually have more of this metadata than you at the moment, but my system is only setting aside 10 GiB for it while yours is holding on to a whopping 317 GiB.
This might be a long shot, but try shrinking down the filesystem by 1 GB like this:
sudo btrfs filesystem resize -1G /
Then expand it back to full size:
sudo btrfs filesystem resize max /
Do another balance operation when it finishes, then let’s take one more look at the output of sudo btrfs filesystem usage /.
sudo btrfs subvolume list /
ID 257 gen 1484345 top level 5 path @home
ID 258 gen 1482498 top level 5 path @root
ID 259 gen 1483729 top level 5 path @srv
ID 260 gen 1484336 top level 5 path @cache
ID 261 gen 1482498 top level 5 path @log
ID 262 gen 1482118 top level 5 path @tmp
ID 263 gen 1482429 top level 508 path .snapshots
ID 508 gen 1484345 top level 5 path @
╭─zany130@Garuda in ~ via v3.11.6 as 🧙 took 15ms
╰─λ sudo btrfs balance status /
No balance found on '/'
╭─zany130@Garuda in ~ via v3.11.6 as 🧙 took 16ms
╰─λ sudo btrfs filesystem usage /
Overall:
    Device size:                 931.22GiB
    Device allocated:            807.06GiB
    Device unallocated:          124.15GiB
    Device missing:                  0.00B
    Device slack:                  512.00B
    Used:                        556.52GiB
    Free (estimated):            126.11GiB  (min: 64.03GiB)
    Free (statfs, df):           126.11GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB  (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:551.00GiB, Used:549.05GiB (99.65%)
   /dev/nvme0n1p2  551.00GiB

Metadata,DUP: Size:128.00GiB, Used:3.73GiB (2.92%)
   /dev/nvme0n1p2  256.00GiB

System,DUP: Size:32.00MiB, Used:112.00KiB (0.34%)
   /dev/nvme0n1p2   64.00MiB

Unallocated:
   /dev/nvme0n1p2  124.15GiB
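A quick arithmetic check of that output shows where the "missing" space sits. This is plain bookkeeping on the numbers above; the -musage balance filter mentioned in the comment is a real option of btrfs balance, but whether it resolves this particular case is an assumption, not something confirmed in the thread.

```python
# Sanity-check of the `btrfs filesystem usage /` numbers above (all GiB).
# With the DUP profile every metadata byte is stored twice, so 128 GiB of
# metadata "Size" pins 256 GiB of raw device space, even though only
# 3.73 GiB of it is actually used.

data_alloc     = 551.00        # Data,single block groups
metadata_alloc = 128.00        # Metadata,DUP (logical size)
metadata_raw   = metadata_alloc * 2  # DUP -> two copies on disk
system_raw     = 64 / 1024     # System,DUP, 64 MiB raw
unallocated    = 124.15

device = data_alloc + metadata_raw + system_raw + unallocated
assert abs(device - 931.22) < 0.1  # matches "Device size"

# Nearly all of the metadata allocation is empty; in principle a filtered
# balance such as `btrfs balance start -musage=10 /` could hand it back:
metadata_used = 3.73
reclaimable_raw = (metadata_alloc - metadata_used) * 2
assert reclaimable_raw > 240       # ~248 GiB of raw space tied up in empty metadata
```

In other words, the filesystem is not "eating" the space with data; it is sitting in almost-empty metadata block groups that a balance restricted to metadata should be able to reclaim.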
Still haven’t been able to solve this, and I just ran out of disk space again… I think I should just reinstall and use a different filesystem that doesn’t eat 300 GB of my disk space.