Cannot boot anymore (root full, snapshot not working)

That’s the spirit! *tips hat*

Try following the steps outlined in this article: https://wiki.archlinux.org/title/Pacman#Pacman_crashes_during_an_upgrade

It gets a little tricky if the Pacman libraries are messed up or missing and you have to do the whole pacman-static bit, but if you stick with it you have a good chance of fixing it (or learning something :wink:).
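For anyone following along, the pacman-static bit from that wiki article boils down to something like the sketch below. The download URL should be double-checked against the wiki before use, and the function name is just illustrative; all of it runs inside the broken system (e.g. from a chroot).

```shell
# Rough sketch of the pacman-static route, condensed from the wiki article
# linked above. Verify the URL on the wiki first; it may change.
repair_with_pacman_static() {
  cd /root || return 1
  # A statically linked pacman works even when pacman's shared libraries
  # are broken or half-upgraded.
  curl -LO https://pkgbuild.com/~morganamilo/pacman-static/x86_64/bin/pacman-static
  chmod +x pacman-static
  ./pacman-static -Syu    # finish the interrupted upgrade with the static binary
}
```

Calling `repair_with_pacman_static` from the chroot then lets the static binary complete whatever the regular pacman left half-done.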

For what it’s worth, I have seen this process succeed. For example here: Crash during os update - issues repairing using arch-chroot - #7 by Beinje - Pacman - EndeavourOS


Thanks! I’ll try, fingers crossed. :nerd_face:

BTW, what about the Garuda tool “Reinstall all packages”?

You can certainly try that:

garuda-update remote fullfix

However, if pacman’s messed up… eh, I don’t know. But it’s certainly worth a try.


It seemed to work for a while, until…

error: problem occurred while upgrading gst-plugins-bad-libs
error: could not commit transaction
error: failed to commit transaction (transaction aborted)
Errors occurred, no packages were upgraded.

:smiling_face_with_tear:

Unfortunately it stops at the second step:

sh-5.2# mount -t proc proc /mnt/proc
mount: /mnt/proc: mount point does not exist.
dmesg(1) may have more information after failed mount system call.

The reinstall is getting closer and closer…

Paste the full logs; maybe we can see what went wrong.

/var/log/garuda/garuda-update

from chroot env.

do look at dmesg and what it says about the failed mounting.
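For reference, a minimal chroot setup from the live USB might look like this sketch. The device name and subvolume are assumptions (/dev/sda2 as the btrfs root and @ as the root subvolume, Garuda's default layout); a "mount point does not exist" error for /mnt/proc usually just means the root filesystem was never mounted at /mnt first.

```shell
# Minimal sketch: mount the installed root before anything else,
# otherwise /mnt/proc does not exist yet.
enter_chroot() {
  mount -o subvol=@ /dev/sda2 /mnt   # adjust device and subvolume to your system
  arch-chroot /mnt                   # binds /proc, /sys and /dev for you
}
```

If `arch-chroot` is available on the live medium, it replaces the manual `mount -t proc ...` steps entirely.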

Unfortunately it seems that the previous try completely filled the root partition, so I cannot try again to give you complete logs (I had to reboot in the meantime). :frowning:
I already tried pacman -Scc to clear the caches, but it did not free any space.

And trying to get the log anyway gives me a Permission denied.

This is the situation:

sh-5.2# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        47G   46G     0 100% /
udev            7.8G     0  7.8G   0% /dev
shm             7.8G     0  7.8G   0% /dev/shm
run             7.8G     0  7.8G   0% /run
tmp             7.8G  4.4M  7.8G   1% /tmp
overlay          12G  329M   12G   3% /etc/resolv.conf

Can I try and delete the tmp to free some space and try again?
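Before deleting anything, it can help to see what is actually eating the space. A read-only sketch; on btrfs, old snapper snapshots are a common culprit and do not show up in the package cache:

```shell
# Read-only: list the biggest top-level directories on the root filesystem.
# -x stays on one filesystem, so other mounts are not counted.
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -n 10
# On btrfs, snapper snapshots also hold space; inspect them with:
#   snapper list
# and remove old ones with:
#   snapper delete <number>
```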

You can, but that will just give you back about 4 MiB of disk space, and I suspect you need quite a bit more than that.

How about resizing the partitions and putting the new UUIDs (changing partitions causes their UUIDs to change) in /etc/fstab? Of course the fstab entries have to be updated from the chroot, so do check whether you can open the file, make modifications, and save them. Otherwise the entire disk resize would be pointless.
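To find the new UUIDs after repartitioning, something like this read-only helper works (the name is illustrative); the fstab edit itself still has to be done by hand from the chroot:

```shell
# Read-only: compare the UUIDs the kernel currently sees with the ones
# /etc/fstab still expects after repartitioning.
check_fstab_uuids() {
  lsblk -f                      # device names, filesystems, UUIDs, mountpoints
  grep -E '^UUID=' /etc/fstab   # the UUIDs the system will try to mount
}
```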

Can you try,

sudo paccache -rk0

I suspect the reason your operation failed is most likely the limited disk space. Oh, by the way, you didn’t have to run the garuda-update command again to produce the logs; the file /var/log/garuda/garuda-update should already have them. Hopefully.

Thanks NaN, this is the content of the log file:

>-<->-< garuda-update at 2024-04-12 14:20 CEST(+02)


--> Refreshing mirrorlists using rate-mirrors, please be patient..

:: Synchronizing package databases...
garuda downloading...
core downloading...
extra downloading...
multilib downloading...
chaotic-aur downloading...
spawn pacman -Su
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...

Packages (6) lutris-0.5.17-1  ugrep-5.1.4-1  vulkan-swrast-1:24.0.5-1  webkit2gtk-2.44.1-1  >

Total Download Size:    60.64 MiB
Total Installed Size:  238.27 MiB
Net Upgrade Size:        0.16 MiB

:: Proceed with installation? [Y/n] y
warning: no /var/cache/pacman/pkg/ cache exists, creating...
:: Retrieving packages...
webkit2gtk-4.1-2.44.1-1-x86_64
:: Running pre-transaction hooks...
(1/1) Performing snapper pre snapshots for the following configurations...
fatal library error, lookup self
pre-num 635 for post-num 636 does not exist
==> root: 645
:: Processing package changes...
:: Running post-transaction hooks...
(1/9) Arming ConditionNeedsUpdate...
(2/9) Refreshing PackageKit...
Error connecting: Could not connect: No such file or directory
error: command failed to execute correctly
(3/9) Foreign/AUR package notification
khotkeys 5.27.10-1
kwin-scripts-forceblur 0.6.1-1.3
plasma5-applets-eventcalendar 76-1.4
stacer 1.1.0-1.4
thorium-browser-bin 120.0.6099.235-2
wordgrinder 0.8-1
(4/9) Orphaned package notification...
appstream-qt5 1.0.2-1
prison5 5.115.0-1
(5/9) Checking for .pacnew and .pacsave files...
.pac* files found:
/etc/passwd.pacnew
/etc/shells.pacnew
/etc/locale.gen.pacnew
/etc/pam.d/kde.pacnew
/etc/pacman.conf.pacnew
/etc/pacman.d/mirrorlist.pacnew
Please check and merge
(6/9) Updating icon theme caches...
(7/9) Updating the desktop file MIME type cache...
(8/9) Performing snapper post snapshots for the following configurations...
fatal library error, lookup self
pre-num 635 for post-num 636 does not exist
==> root: 646
(9/9) Syncing all file systems...
Running in chroot, ignoring command 'start'
Running in chroot, ignoring command 'start'
Failed to connect to bus: No such file or directory
Failed to start transient service unit: Transport endpoint is not connected
>-<->-< garuda-update at 2024-04-12 14:21 CEST(+02)


--> Refreshing mirrorlists using rate-mirrors, please be patient..

:: Synchronizing package databases...
garuda downloading...
core downloading...
extra downloading...
multilib downloading...
chaotic-aur downloading...
spawn pacman -Su
:: Starting full system upgrade...
there is nothing to do
Running in chroot, ignoring command 'start'
Running in chroot, ignoring command 'start'
Failed to connect to bus: No such file or directory
Failed to start transient service unit: Transport endpoint is not connected

(the runs at 14:29, 14:37, and 14:45 repeated the same “there is nothing to do” output and are trimmed here)

I think it even shows the latest tries.

Here’s the result:

sh-5.2# sudo paccache -rk0
==> no candidate packages found for pruning

Can I use the graphical installer or the disk utility in the live environment to resize? I’d delete the 16 GB swap, shrink /home, and enlarge /.
Then I’d have to edit /etc/fstab and manually substitute the new UUIDs?

Btw, for testing as you suggested, I tried editing and saving fstab as root and… it refuses to save because there’s no disk space! :roll_eyes:

One last thing: if I unmount the root and try to resize (deleting swap, shrinking home), it seems I cannot enlarge the root… maybe because the free space is not contiguous? :thinking:

Well, that’s a bummer; it seems the logs don’t even contain the “error:” messages…

A yes to all of those.

Yikes… but I guess we can always try with more disk space at hand.

Honestly, I hope someone with a bit more experience can answer this; I haven’t ever needed to perform disk resizing, so I can’t be sure that’s the reason. I am merely forwarding whatever knowledge I have seen on the forum.


Thank you so much again, @NaN! :pray:
I hope someone can help me with the resizing, so I can do this last attempt before nuking the system. :crossed_fingers:

EDIT: maybe I figured it out: I needed to gradually move the unallocated space towards the unmounted root partition. Now it seemingly is slowly moving and resizing… :crossed_fingers:

NOPE.
Another fail. :weary:

Delete partition ‘/dev/sda4’ (16.60 GiB, linuxswap) 
Job: Delete file system on ‘/dev/sda4’ 
Command: wipefs --all /dev/sda4 
Delete file system on ‘/dev/sda4’: Success

Job: Delete the partition ‘/dev/sda4’ 
Command: sfdisk --force --delete /dev/sda 4 
Delete the partition ‘/dev/sda4’: Success
Delete partition ‘/dev/sda4’ (16.60 GiB, linuxswap): Success

Move partition ‘/dev/sda3’ to the right by 16.60 GiB 
Job: Check file system on partition ‘/dev/sda3’ 
Command: btrfs check --repair /dev/sda3 
Check file system on partition ‘/dev/sda3’: Success

Job: Set geometry of partition ‘/dev/sda3’: Start sector: 1,083,394,048, length: 870,125,568 
Command: sfdisk --force /dev/sda -N 3 
Set geometry of partition ‘/dev/sda3’: Start sector: 1,083,394,048, length: 870,125,568: Success

Job: Move the file system on partition ‘/dev/sda3’ to sector 1,083,394,048 
Copying 42,486 chunks (445,504,290,816 bytes) from 554,697,752,576 to 554,697,752,576, direction: left. 

(progress lines trimmed: copy speed around 150-165 MiB/second, roughly 40 minutes in total)

Copying remainder of chunk size 6,291,456 from 1,000,195,751,936 to 1,000,195,751,936. 

Copying 42,486 chunks (445,504,290,816 bytes) finished. 

Closing device. This may take a few seconds. 
Move the file system on partition ‘/dev/sda3’ to sector 1,083,394,048: Success

Job: Check file system on partition ‘/dev/sda3’ 
Command: btrfs check --repair /dev/sda3 
Check file system on partition ‘/dev/sda3’: Error

Checking partition ‘/dev/sda3’ after resize/move failed. 
Move partition ‘/dev/sda3’ to the right by 16.60 GiB: Error

I think maybe I’m done this time.

That’s sad… :slightly_frowning_face: I guess the only option left is a reinstall…

I guess so.
As a last attempt, I’m trying to expand the root using only the free space left by the swap.
That should gain me approx. 20 GB; if it goes well, I’ll try

garuda-update remote fullfix

again.
If something fails: it’s reinstall time. :smiling_face_with_tear:

Thanks again to everybody who helped! :pray:

The root resizing worked.
But.
It seems the previous error nuked my /home partition.


So even worse than I believed.
A really, really sad finale.

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.