No Space Left on Device, but space available

Thanks, ran a check. No errors.

1 Like

So I’ve added space; there is now 7 GB free. I tried to run garuda-update and I’m still getting “no space left on device” errors during the update. What is eating up this unknown space?

Did you expand the filesystem into the space you added?

How are you determining how much free space there is? Check this output:

sudo btrfs filesystem usage /

See also here: https://wiki.archlinux.org/title/Btrfs#Displaying_used.2Ffree_space
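
If you enlarged the partition but not the filesystem on top of it, btrfs can be grown online. A minimal sketch, assuming the root filesystem is btrfs and mounted at /:

# grow the filesystem to fill the (already enlarged) partition
sudo btrfs filesystem resize max /

# confirm the new size
sudo btrfs filesystem usage /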

3 Likes

I did expand it into the space I added.

I’m able to get into my system without the LiveUSB, but Wayland crashes. I ran garuda-update from the command line and got “no space left on device” errors as it ran. Pacman runs fine; it’s just the scripts that garuda-update runs that give this error. Many of the paths they were altering started with the /tmp directory.

The btrfs filesystem usage / output shows:
4.37 GB free
Metadata at 67.62% used
System, DUP at 0.05% used

What does this mean? Post the exact error you are seeing into the thread. This is too vague to be meaningful.

What do the errors exactly say? Paste the full input and output into the thread so we can see the context of the errors as well.

What does this mean? What Pacman commands did you run? Was Pacman ever not running fine? What conclusion are you drawing here?

What scripts? What is the error?

Do you have /tmp mounted on tmpfs or on the disk?
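
One quick way to check (a sketch; findmnt ships with util-linux on any Arch-based system):

# show the mount that contains /tmp
findmnt -T /tmp

If the SOURCE/FSTYPE columns show tmpfs, /tmp lives in RAM; otherwise it sits on the root filesystem.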

There should be a lot more output than that.

In general, there is just not a lot of information here so it’s hard to tell what’s happening. It sounds like your system may have some other problems besides the disk being full.

5 Likes

The Wayland error is: plasmashell (plasmashell), signal: Segmentation fault
from: /usr/lib/libc.so.6

Pacman synchronizing, package downloading, and updating all work as normal.
It is only the scripts that error out during garuda-update.

There is more information, but since Plasma doesn’t come up, I can’t capture the filesystem usage info easily, which is why I typed out what I thought was relevant.

My system issues are due to the partial update that crashed, which kicked off this whole chain of events. The common thread, though, is that it says the disk is full when I clearly have free space according to all measures.

As far as I know, /tmp is mounted on the disk.

I did a scrub & balance again, rebooted, and ran garuda-update; it completed with no out-of-disk errors. Rebooted again and the system is back up.
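
For anyone who lands here with the same symptoms, the commands involved look roughly like this (a sketch; the usage filters are a common starting point, not a requirement):

# rewrite partially-used chunks so btrfs can hand the empty ones back
sudo btrfs balance start -dusage=50 -musage=50 /

# verify data and metadata checksums in the background, then check progress
sudo btrfs scrub start /
sudo btrfs scrub status /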

While I’ve solved my problem, the question still remains: where is the discrepancy between the free space reported and the system thinking it’s full? I’m trying to learn from this experience, so next time I can just fix it without hitting the forums.

the system thinking it’s full?

Maybe that can give you some knowledge =)

We are used to thinking of a directory as containing files. This is really an illusion: directories do not contain files, and the data of the files is not stored in the directory.
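
You can see this for yourself: a directory entry is just a name pointing at an inode, and two names can point at the same data (the file names here are made up for the demonstration):

echo hello > a.txt
ln a.txt b.txt        # a second directory entry for the same file
ls -i a.txt b.txt     # prints the same inode number for both names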

1 Like

@slfaber
Where are you going with this? Is this just a test to see how small you can make it? :smiley:
I have a feeling Garuda is not your daily driver :smile:
If it is, stop strangling it :rofl:

4 Likes

It is my daily driver. And no, I’m not trying to see how small I can make it; I’ve run with 30 GB for over a year now. A distribution suddenly needing 20% more drive space to function isn’t something where you should just add more disk and move on. I really want to understand the system I’m using, so when it demands more, I would like to be able to answer why.

This has nothing to do with the distribution but rather with the file system chosen. I recommend that you take a look at BTRFS and Snapper and learn how and why they work.

Either you use a different file system or you enlarge the root partition; otherwise you will soon run into exactly the same error again.

Many users forget an important point when it comes to operating systems: the file system.
Each has advantages and disadvantages - one is better suited for this use case, the other for that one.
If your use case is to run Linux on a 30 GB partition, then BTRFS might be the wrong choice.

But that’s your decision :wink:
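
For example, since snapshots keep every block they reference alive even after the files are deleted, checking and pruning them is usually the first step when space vanishes on a snapper-managed system. A rough sketch, assuming the default root config:

# list existing snapshots for the root config
sudo snapper -c root list

# delete one by its number (42 is a made-up example)
sudo snapper -c root delete 42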

2 Likes

My old installation procedure with ext4 as my filesystem was to use a dedicated 64 GB SSD for Arch and its derivatives. I found this plenty spacious.

After switching to Garuda with the BTRFS file system, I quickly realized that a 64 GB drive would be too cramped. I now use a dedicated 128 GB SSD for all BTRFS installations. YMMV, but I think you’re really pushing things with so small a system partition for Garuda.

3 Likes

I think 30 GB is the absolute minimum, but I only run that for testing purposes (ok, 35 GB).
The more software I use, the larger I should estimate the storage requirement; with 128 GB, as tbg says, you should get by for quite a long time. The “tidying up” is done by snapper.

2 Likes

If you use ext4 instead of btrfs, 30 GB may be just enough. But then you always have to keep an eye on the pacman cache, for example, and keep your system tidy - at least if you don’t run a bare system but also have a few programs installed.
I use ext4 and therefore, of course, no snapper. Root and home are separate and there are 1664 packages installed. Of the 50 GB for the root partition, just under 22 GB is occupied. I worked with 30 GB for root for a long time, but now I don’t like constantly cleaning up and having to keep an eye on everything. It is extremely annoying to have to free up a few GB just to get root running again. With today’s drive sizes and prices, 20 GB more is as good as nothing and makes life considerably easier and less stressful.
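
For the pacman cache specifically, something like this keeps it in check (paccache comes from the pacman-contrib package):

# keep only the two most recent cached versions of each package
sudo paccache -rk2

# or remove all cached packages that are no longer installed
sudo pacman -Sc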

2 Likes

My guess is that some background or failed/escaped process was writing in a loop, or something similar. Maybe it was a bug somewhere, or a failed pacman operation that left things in a bad state.
To find the real reason, you have to investigate the relevant journal logs from that time.
Also, check your filesystem space usage with something like baobab. Look for large files or folders.
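
A starting point for both checks, with the time window as a placeholder you would adjust to when the problem occurred:

# errors logged in the journal during a given window (dates are placeholders)
journalctl --since "2024-01-01" --until "2024-01-02" -p err

# the 20 largest directories on the root filesystem, without crossing mounts (-x)
sudo du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 20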

2 Likes