Usability problem with updating in Garuda/Arch

Hi Garuda folks,

[not sure if this is an issue or feedback with a feature request, therefore I flagged the latter.]
I have been using Garuda for some time now, but updating is still the most inconvenient and risky part for me: it usually requires manually checking free disk space every time and cleaning up if necessary, and I am not happy with the frequent manual effort.

(Details: my root partition is unfortunately limited to 48 GB at the moment. My cleaning process is to clear the entire pacman cache with sudo pacman -Scc (because this simply works; maybe “clear package cache” in Garuda Assistant would help too) and to delete half to three quarters of the old snapshots, which in total frees about half of the 46 GB used on the root partition.)

In my Dr460nized KDE Plasma version, updates do not abort when there is insufficient space on my root partition for the whole update, e.g. if I accidentally trigger a full update instead of updating only one package. The failing update process then simply breaks my system, silently! The kernel binaries vanished, as did other updated packages. (This is a fact, not an allegation.) Fortunately I am prepared: I restore a snapshot using the Btrfs Assistant. But this trouble seems quite unnecessary, and it hurts usability for ordinary users, I guess.
Old Arch post describing the same problem.

Would it be possible to add a simple option or feature to garuda-update that aborts the update beforehand if the worst-case space requirement does not fit into the partition with a certain margin? It would help very much.

Or does this option exist already?
It would also be very cool if it could directly offer the choice of clearing the package cache and retrying before resorting to aborting.

[Annoying question: is there really no alternative to this kind of fragile big-bang integration of a large update?]

Just saying, because everything else works great for me in Garuda: all software is cutting edge and it is a rolling release. This is the only issue that has bothered me for most of the time I have been using Garuda.

Thank you for your understanding.


The default behavior is for an update to abort if there is not enough space. This function is provided by the CheckSpace option in /etc/pacman.conf, which is enabled by default:


Performs an approximate check for adequate available disk space before installing packages.

It says it’s an approximate check, but: yes, the feature already exists.
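For reference, on a stock Arch-based install the option lives in the [options] section of /etc/pacman.conf, roughly like this (surrounding options omitted):

```ini
[options]
# Abort the transaction if the (approximate) estimated disk usage
# exceeds the available space on the affected mount points.
CheckSpace
```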

I have my doubts about this claim; I read through your previous issue here and it seems like there was a misconfiguration with your filesystem. But if what you say is accurate and you are able to reproduce the issue, then you should submit a bug report as mentioned in the pacman.conf man pages:


Bugs? You must be kidding; there are no bugs in this software. But if we happen to be wrong, submit a bug report with as much detail as possible at the Arch Linux Bug Tracker in the Pacman section.

I think what you are describing is unlikely to be implemented, because as I mentioned the default behavior is to cancel the transaction if there is not enough space. At that point the user can decide what to do. I think a lot of folks would not appreciate the tool intervening any further than that.

If you would like, it is very easy to automatically clear the cache before you update or install packages by setting up a hook. For example, if you would like to clear all but one package version from the cache before each update, you could set it up like this:

sudo micro /etc/pacman.d/hooks/paccache.hook

You can call the file anything you’d like, as long as it ends in .hook.

Paste into the file:

[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Prune package cache before updating
When = PreTransaction
Exec = /usr/bin/paccache -rvk1

Each time it runs, it will give you a summary of how much space it saved. With the v option it will also list each package it deletes (you can remove the v to make the hook less verbose if you’d like).

Annoying answer: for most users the update routine is not fragile. :eyes:

If you are running into issues with large updates, consider updating more frequently. If you update every day, you will never have large updates. :bulb:


Regarding “abort if there is not enough space” failing to prevent a borked system: I hit this problem once too.
My fault, to be honest, because I shrank the root partition too much (I wanted to make room to install Hyprland and then try the “multiple Garuda installs on subvolumes” thing with my current Sway, which I never found time to actually do).

By the way, the error may also come from Btrfs needing a balance (e.g. btrfs balance start -dusage=50 /) rather than from actually having no space left.

While we’re at it (unrelated): I noticed that when the update runs in two steps (an update of garuda-update or the keyring), the snapshot hook is skipped for the first step. Would it make sense to also skip a few others, like searching for .pacnew files, orphans, and packages needing a rebuild?

Thank you very much for your helpful answer, BluishHumility :slight_smile: . I am sorry for my naive guessing. Now I think it could be a space-computation problem on my device: a much smaller usability problem, but a bigger issue than I thought.

(As you suggested, I am updating (almost) every day I use Garuda which I also found helpful.)

I see, the CheckSpace flag is indeed activated (it would be weird if that feature didn’t exist). Some time ago I tried enlarging my root partition after moving the swap partition, using KDE Partition Manager. It did not go smoothly, but I could fix the /etc/fstab file, and you folks (the one with the penguin avatar) helped me grow the Btrfs filesystem to the new full size using the btrfs filesystem resize max command.

Maybe this past process is related to the problem, and the space computation for my root partition is wrong, so the check fails every time without any error or warning message. Hm, this really must be a problem on my side; otherwise you would have known about it already.

I’ll try to figure it out. I might ask the pacman and Btrfs people, and I will definitely try to set up a hook as you suggested.

Have a nice day!

I think it’s because of zstd compression on Btrfs: there is no way to know in advance how well data that is yet to arrive will compress, so it has to guess (“on average, it’s going to be 4x”), and then already-compressed packages come down that can’t be compressed much further and will expand when installed.

I don’t currently know whether my problem is related to compression. I executed btrfs property get / compression but the output was empty. I tried getfattr -n btrfs.compression / but it returns /: btrfs.compression: No such attribute. Eventually I asked ChatGPT, which suggested btrfs filesystem df /; however, that output does not contain any compression property either.

I assume my partition is uncompressed.

Indeed, property get has nothing to say, but:

mount -t btrfs
/dev/sda3 on / type btrfs (rw,noatime,compress=zstd:3,space_cache=v2,subvolid=1382,subvol=/@)

and now I don’t know which one is right?

Yeah, this technique shows a zstd level 3 compression for the partition and all subvolumes. Well, ChatGPT couldn’t help. If this issue were compression-related, wouldn’t other people experience the same?

I have been looking at the library code of Pacman. It has two different space checks: one for caching (when it is downloading, check_downloadspace) and one for the installation (when a package or version is “added”, check_diskspace).

Space requirements for the cache mount point are computed by determining the required disk blocks for each downloaded package’s size (plus one block of buffer) and summing them together.
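As a rough illustration of this accounting, here is a pure-bash sketch (not Pacman’s actual C code; the block size and package sizes are made-up dummy values):

```shell
# Sketch of the cache-space accounting: round each package's download
# size up to whole blocks, add one extra buffer block, then sum.
block_size=512                     # assumed filesystem block size
pkg_sizes=(1048576 20480 999)      # hypothetical download sizes in bytes

required_blocks=0
for size in "${pkg_sizes[@]}"; do
    # ceil(size / block_size) plus the one-block buffer
    blocks=$(( (size + block_size - 1) / block_size + 1 ))
    required_blocks=$(( required_blocks + blocks ))
done
echo "$required_blocks"   # prints 2093 for the dummy sizes above
```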

Space requirements for the updated mount points are slightly more complex.
The disk blocks of “removed” (or replaced) packages’ files are deducted from the space requirements of the associated mount points all at once. (Without knowing Pacman’s exact sync procedure, this doesn’t automatically make sense; it only makes sense if those packages are deleted all at once.)
For the other packages, it takes the maximum space requirement that appears at any point in the ordered sequence of package additions.
Already-cached packages being re-installed are deducted from the requirements, while space requirements are increased for each installation. Space for installed packages is computed almost identically to space for removed packages, but an installed package may associate a file with a different mount point when the file name starts with a dot.
Funny: (hard?) links and directories are assumed to have zero size (“libarchive” would do the same).

As I suggested, they already implement a “cushion” on the space requirements of individual mount points in the final check. The comment says “cushion is roughly min(5% capacity, 20MiB)”, i.e. at most 20 MiB for any practically sized partition.
The final check merely compares the determined blocks (with cushion) against the free blocks for each involved mount point separately.

As I feared, the count of maximally required blocks per mount point can be negative (it is a signed blkcnt_t), and then the check passes without any complaint. I tried to find debug messages in both /var/log/pacman.log and with journalctl | grep '^Nov 30' | less, but could not find any.
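To illustrate that failure mode, here is a contrived pure-bash sketch with made-up numbers (not Pacman’s code): if the “removed” blocks are over-credited, the net requirement goes negative and the comparison against free blocks passes trivially:

```shell
# Hypothetical per-mountpoint accounting with dummy numbers.
installed_blocks=5000    # blocks needed by the new package versions
removed_blocks=80000     # blocks credited for files Pacman expects to delete
free_blocks=1000         # blocks actually free on the mount point
cushion=40               # safety margin in blocks

required=$(( installed_blocks - removed_blocks ))   # -75000: negative!
if (( required + cushion <= free_blocks )); then
    verdict="passes"     # passes even though 1000 free blocks is far too little
else
    verdict="fails"
fi
echo "check $verdict"    # prints "check passes"
```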

I should ask in the Arch Linux Forum.


That’s good research! It is an interesting clue that both you and meanruse have run into what sounds like the same issue after resizing partitions/filesystems. :thinking:

I don’t think you should ask in the Arch forum; it would be better to raise an issue on the GitLab issue tracker. You can raise an issue against a specific package (Pacman). That way, folks who are actually working on Pacman can take a look and determine whether Pacman is not handling the CheckSpace option properly. Or, if it is not an issue with Pacman, they may be able to help identify why it’s not working on your system.

A complicating factor is that they appear to have disabled account registration, except by email:

If you are willing to jump through that additional hoop I would encourage you to go for it.

It sounds like you have a lot of good info so far to put together a detailed bug report, but be sure to have a read through the bug reporting guidelines beforehand so you know what the expectations are: Bug reporting guidelines - ArchWiki


Hey there,

before I read your answer, I posted in the Arch forum (yes, GitLab probably would have been better). And a user came up with this unsolved Pacman issue:

Yeah, it is related to Snapper snapshots (because the snap-pac hooks use Snapper).

The solution I see now is to clean up snapshots (ideally automatically) when the remaining space is too scarce for all the files.

I’ll try to make a shell command to check for the condition and let you know.


Aha, the smoking gun. Yes, the linked discussion makes perfect sense now that it is explained:

“In a setup like the one used by snap-pac, which creates pre- and post-upgrade snapshots on btrfs partitions, deletion of a file does not free any space; this only happens once the snapshot is deleted.”

Pacman is anticipating the space being released when the package is deleted, but it is not because the old version is captured in the snapshot (and all preceding snapshots with that package version).

I agree with this suggestion in one of the comments:

“An option to just not count any of the data removed (ie calculate_removed_size() return 0) would prevent this issue.”

On systems which use pre-update snapshotting, it would be better to assume no space will be released from deleting the old package, and let the update cancel before it begins if there is not enough space. Then the user would have an opportunity to free up some space with whatever method they think is appropriate, without Pacman crashing partway through and potentially wrecking the install.

The scope of such an MR is significantly beyond my extremely rudimentary abilities, but later today I can at least log in and leave a comment on that thread. Maybe if enough of us chime in and express support, it will revive the discussion and put it back on the devs’ radar.


I created an account to participate in the new Arch bug tracker system thinking that would allow me to chime in on the issue. After sending the email (I got a response within a half hour), I had to set up MFA for a login, and then finally use the password reset tool to set a password so I could log in.

Unfortunately, after all that I discovered having a login on the new site does not let you log in on the old site. :unamused:

I decided to re-open the issue on the new site, to hopefully refresh it and announce there is still community interest in having this resolved. After first raising the issue in the wrong part of the site (I put it in Arch Linux/Packages/pacman/Issues which is for packaging related issues :expressionless:) I have now raised the new issue here:

If anyone is willing to go through the bother of setting up an account for the Arch bug tracker, I would encourage you to add your comments to the thread or give it a thumbs-up or whatever.


Due to an influx of spam, we have had to temporarily disable account registrations. Please write an email to [email protected], with your desired username, if you want to get access. Sorry for the inconvenience.

Yes, that’s right: you have to send the email with the username you want, and then someone will reply with a link you can use to set up the MFA.

Hello guys,

I spent some time writing a bash script that deletes up to half of the snapshots (of the snapper root configuration) when the free data space alone (without metadata, System, and co.) is insufficient, with a 20 MB margin, compared to the sum of the updated package sizes.

I tested the while loop with random dummy numbers for the space computations. The space computation works fine for me.

I assumed that the cache and all updated packages live on the Btrfs partition associated with the “updatedMountpoint”, so I only compute the difference between total and used data space.

I also assumed that the snapshots in the snapper list are sorted by age, the first one being the oldest, and that deleting a snapshot takes effect immediately, so that the recomputed available data space changes.
And I assumed the script runs as root.

computeMemoryExpression() {
    echo "$*" | sed -E -e 's/GiB/*1024*1024*1024/g' -e 's/MiB/*1024*1024/g' -e 's/KiB/*1024/g' | bc
}

computeAvailableSpace() {
    computeMemoryExpression "$(btrfs filesystem df "$updatedMountpoint" | grep 'Data, single:' | sed -E 's/.*total=([^,]+), used=(.+)/\1-\2/')"
}

computeRequiredSpace() {
    # alternative: checkupdates (slow!)
    computeMemoryExpression "$(pacman -Qu | cut -d' ' -f1 | xargs pacman -Si | grep 'Installed Size' | cut -d':' -f2 | tr '\n' '+' | tr ',' '.') 0"
}

updatedMountpoint=/
# pacman -Sy should already have been run when the system is updated
availableSpace=$(computeAvailableSpace)
requiredSpace=$(computeRequiredSpace)

if (( ${availableSpace%.*} >= (${requiredSpace%.*} + 20000000) )); then # 20MB margin
    echo "enough space available: ${availableSpace%.*} >= required space $((${requiredSpace%.*} + 20000000))"
else
    # oldest snapshots first; delete at most half of them
    snapshotNumbers=$(snapper list | grep '^[[:digit:]]' | cut -d' ' -f1 | tr '\n' ' ')
    toRemove=$(( $(wc -w <<< "$snapshotNumbers") / 2 ))

    while (( ${availableSpace%.*} < (${requiredSpace%.*} + 20000000) )) && (( toRemove-- > 0 )); do
        echo "removing snapshot ${snapshotNumbers%% *}"
        snapper -c root delete "${snapshotNumbers%% *}"
        snapshotNumbers=${snapshotNumbers#* }
        availableSpace=$(computeAvailableSpace)
    done

    if (( ${availableSpace%.*} < (${requiredSpace%.*} + 20000000) )); then
        echo "Still not enough space (${availableSpace%.*}) for the system update ($((${requiredSpace%.*} + 20000000)))!! You might want to stop the process!" >&2
    fi
fi
Needs bc (basic calculator).
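If bc is not available, the unit conversion could also be sketched in pure bash integer arithmetic (an alternative sketch, not part of the script above; unlike the bc version it only handles whole-number sizes):

```shell
# Convert a size string like "3GiB" or "512KiB" to bytes without bc.
# Whole numbers only; fractional sizes would need the bc approach.
to_bytes() {
    local n=${1%[KMG]iB}          # strip the unit suffix, keep the number
    local unit=${1#"${1%[KMG]iB}"}  # what was stripped is the unit
    case $unit in
        KiB) echo $(( n * 1024 ));;
        MiB) echo $(( n * 1024 * 1024 ));;
        GiB) echo $(( n * 1024 * 1024 * 1024 ));;
        *)   echo "$n";;          # plain byte count, no suffix
    esac
}

to_bytes 3GiB    # prints 3221225472
to_bytes 512KiB  # prints 524288
```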

Unfortunately, I cannot invest much more time. I am not sure how to generalize it to other snapper configs, or how to wrap it into one command to execute from a pacman hook.
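For what it’s worth, wiring such a script into a pre-transaction hook could look roughly like this (an untested sketch; /usr/local/bin/snapshot-space-check.sh is a hypothetical path where the script would be installed):

```ini
# /etc/pacman.d/hooks/snapshot-space-check.hook (hypothetical)
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Prune old snapshots if free space is too low for the update
When = PreTransaction
Exec = /usr/local/bin/snapshot-space-check.sh
AbortOnFail
```

AbortOnFail makes pacman cancel the whole transaction if the script exits non-zero, which fits the “abort before anything is touched” behavior discussed above.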

I have already created the other hook which cleans the paccache.

Maybe, this script could be useful to someone even though it is quite specific regarding the mount point and the snapper configuration.


This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.