System keeps booting into snapshot after restoring

Hello dear Garuda Users,

I recently stumbled over a line in the output of garuda-update that read:

(1/1) Performing snapper pre snapshots for the following configurations...
IO Error (.snapshots is not a btrfs subvolume).

After a bit of digging I found out that I still seem to be booted into the snapshot I tried to restore about half a year ago. While trying to resolve this issue I only created more and more snapshots inside the snapshot, but I could not get the system to stop booting into a snapshot.

garuda-inxi

╭─jarnek@jarnek in ~ as 🧙 took 2s
╰─λ sudo garuda-inxi
System:
Kernel: 6.5.5-zen1-1-zen arch: x86_64 bits: 64 compiler: gcc v: 13.2.1
clocksource: tsc available: hpet,acpi_pm
parameters: BOOT_IMAGE=/restore_backup_@_202859658/boot/vmlinuz-linux-zen
root=UUID=1b45e05e-bac7-4f7b-940e-4f41c6374d09 rw
rootflags=subvol=restore_backup_@_202859658 quiet quiet
rd.udev.log_priority=3 vt.global_cursor_default=0 loglevel=3 ibt=off
Desktop: KDE Plasma v: 5.27.8 tk: Qt v: 5.15.10 wm: kwin_x11 dm: SDDM
Distro: Garuda Linux base: Arch Linux
Machine:
Type: Desktop Mobo: Micro-Star model: MAG X570S TORPEDO MAX (MS-7D54) v: 1.0
serial: <filter> UEFI: American Megatrends LLC. v: A.00 date: 07/09/2021
CPU:
Info: model: AMD Ryzen 7 5800X socket: AM4 bits: 64 type: MT MCP
arch: Zen 3+ gen: 4 level: v3 note: check built: 2022 process: TSMC n6 (7nm)
family: 0x19 (25) model-id: 0x21 (33) stepping: 0 microcode: 0xA201016
Topology: cpus: 1x cores: 8 tpc: 2 threads: 16 smt: enabled cache:
L1: 512 KiB desc: d-8x32 KiB; i-8x32 KiB L2: 4 MiB desc: 8x512 KiB
L3: 32 MiB desc: 1x32 MiB
Speed (MHz): avg: 4275 min/max: 2200/5456 boost: enabled
base/boost: 4275/4850 scaling: driver: acpi-cpufreq governor: performance
volts: 1.1 V ext-clock: 100 MHz cores: 1: 4275 2: 4275 3: 4275 4: 4275
5: 4275 6: 4275 7: 4275 8: 4275 9: 4275 10: 4275 11: 4275 12: 4275
13: 4275 14: 4275 15: 4275 16: 4275 bogomips: 136803
Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm
Vulnerabilities: <filter>
Graphics:
Device-1: NVIDIA GA104 [GeForce RTX 3060 Ti] driver: nvidia v: 535.113.01
alternate: nouveau,nvidia_drm non-free: 535.xx+
status: current (as of 2023-09) arch: Ampere code: GAxxx
process: TSMC n7 (7nm) built: 2020-23 pcie: gen: 4 speed: 16 GT/s
lanes: 16 bus-ID: 2d:00.0 chip-ID: 10de:2486 class-ID: 0300
Display: server: X.Org v: 21.1.8 with: Xwayland v: 23.2.1
compositor: kwin_x11 driver: X: loaded: nvidia gpu: nvidia display-ID: :0
screens: 1
Screen-1: 0 s-res: 3840x1080 s-dpi: 81 s-size: 1204x343mm (47.40x13.50")
s-diag: 1252mm (49.29")
Monitor-1: Unknown-1 mapped: DP-2 res: 3840x1080 hz: 144 dpi: 82
size: 1196x336mm (47.09x13.23") modes: 3840x1080
API: EGL v: 1.5 hw: drv: nvidia platforms: gbm: drv: nvidia
API: OpenGL v: 4.6.0 vendor: nvidia v: 535.113.01 glx-v: 1.4
direct-render: yes renderer: NVIDIA GeForce RTX 3060 Ti/PCIe/SSE2
memory: 7.81 GiB
API: Vulkan v: 1.3.264 layers: 4 device: 0 type: discrete-gpu name: NVIDIA
GeForce RTX 3060 Ti driver: nvidia v: 535.113.01 device-ID: 10de:2486
surfaces: xcb,xlib
Audio:
Device-1: NVIDIA GA104 High Definition Audio driver: snd_hda_intel v: kernel
pcie: gen: 4 speed: 16 GT/s lanes: 16 bus-ID: 2d:00.1 chip-ID: 10de:228b
class-ID: 0403
Device-2: AMD Starship/Matisse HD Audio vendor: Micro-Star MSI
driver: snd_hda_intel v: kernel pcie: gen: 4 speed: 16 GT/s lanes: 16
bus-ID: 2f:00.4 chip-ID: 1022:1487 class-ID: 0403
Device-3: Corsair VIRTUOSO Wireless Gaming Headset
driver: hid-generic,snd-usb-audio,usbhid type: USB rev: 2.0 speed: 12 Mb/s
lanes: 1 mode: 1.1 bus-ID: 1-5:3 chip-ID: 1b1c:0a42 class-ID: 0300
serial: <filter>
Device-4: Micro Star USB Audio driver: hid-generic,snd-usb-audio,usbhid
type: USB rev: 2.0 speed: 480 Mb/s lanes: 1 mode: 2.0 bus-ID: 3-5:2
chip-ID: 0db0:a073 class-ID: 0300
API: ALSA v: k6.5.5-zen1-1-zen status: kernel-api tools: N/A
Server-1: PipeWire v: 0.3.80 status: n/a (root, process) with:
1: pipewire-pulse status: active 2: wireplumber status: active
3: pipewire-alsa type: plugin 4: pw-jack type: plugin
tools: pactl,pw-cat,pw-cli,wpctl
Network:
Device-1: Realtek RTL8125 2.5GbE vendor: Micro-Star MSI driver: r8169
v: kernel pcie: gen: 2 speed: 5 GT/s lanes: 1 port: e000 bus-ID: 26:00.0
chip-ID: 10ec:8125 class-ID: 0200
IF: enp38s0 state: down mac: <filter>
Device-2: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet
vendor: Micro-Star MSI driver: r8169 v: kernel pcie: gen: 1 speed: 2.5 GT/s
lanes: 1 port: d000 bus-ID: 28:00.0 chip-ID: 10ec:8168 class-ID: 0200
IF: eno1 state: down mac: <filter>
Device-3: Qualcomm Atheros AR9271 802.11n driver: ath9k_htc type: USB
rev: 2.0 speed: 480 Mb/s lanes: 1 mode: 2.0 bus-ID: 5-3:3 chip-ID: 0cf3:9271
class-ID: ff00 serial: <filter>
IF: wlp47s0f3u3 state: up mac: <filter>
Device-4: Realtek RTL8812AU 802.11a/b/g/n/ac 2T2R DB WLAN Adapter
driver: rtl88XXau type: USB rev: 2.0 speed: 480 Mb/s lanes: 1 mode: 2.0
bus-ID: 5-4:5 chip-ID: 0bda:8812 class-ID: 0000 serial: <filter>
IF: wlp47s0f3u4 state: up mac: <filter>
IF-ID-1: br-39e90abfd8e6 state: up speed: 10000 Mbps duplex: unknown
mac: <filter>
IF-ID-2: br-7d937e02dbd4 state: down mac: <filter>
IF-ID-3: docker0 state: down mac: <filter>
IF-ID-4: veth8352a74 state: up speed: 10000 Mbps duplex: full
mac: <filter>
IF-ID-5: vethd33b60b state: up speed: 10000 Mbps duplex: full
mac: <filter>
IF-ID-6: vmnet1 state: unknown speed: N/A duplex: N/A mac: <filter>
IF-ID-7: vmnet8 state: unknown speed: N/A duplex: N/A mac: <filter>
Drives:
Local Storage: total: 3.87 TiB used: 855.36 GiB (21.6%)
ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: Western Digital model: WD BLACK
SN850X HS 1000GB size: 931.51 GiB block-size: physical: 512 B
logical: 512 B speed: 63.2 Gb/s lanes: 4 tech: SSD serial: <filter>
fw-rev: 620281WD temp: 33.9 C scheme: GPT
SMART: yes health: PASSED on: 71d 7h cycles: 479
read-units: 74,559,631 [38.1 TB] written-units: 42,350,419 [21.6 TB]
ID-2: /dev/sda maj-min: 8:0 vendor: SanDisk model: SSD PLUS 1000GB
family: Marvell based SSDs size: 931.52 GiB block-size: physical: 512 B
logical: 512 B sata: 3.2 speed: 6.0 Gb/s tech: SSD serial: <filter>
fw-rev: 00RL temp: 24 C scheme: GPT
SMART: yes state: enabled health: PASSED on: 217d 15h cycles: 1219
ID-3: /dev/sdb maj-min: 8:16 vendor: Seagate model: ST1000LM048-2E7172
family: Barracuda 2.5 5400 size: 931.51 GiB block-size: physical: 4096 B
logical: 512 B sata: 3.1 speed: 6.0 Gb/s tech: HDD rpm: 5400
serial: <filter> fw-rev: SDM1 temp: 25 C scheme: GPT
SMART: yes state: enabled health: PASSED on: 368d 9h cycles: 2330
read: 18.86 TiB written: 17.92 TiB Pre-Fail: attribute: Spin_Retry_Count
value: 100 worst: 100 threshold: 97
ID-4: /dev/sdc maj-min: 8:32 vendor: Intenso model: SSD Sata III
size: 236 GiB block-size: physical: 512 B logical: 512 B sata: 3.1
speed: 6.0 Gb/s tech: SSD serial: <filter> fw-rev: 3B temp: 29 C
scheme: GPT
SMART: yes state: enabled health: PASSED on: 236d 13h cycles: 3157
read: 347.8 MiB written: 395.4 MiB
ID-5: /dev/sdd maj-min: 8:48 vendor: Seagate model: ST1000LM048-2E7172
family: Barracuda 2.5 5400 size: 931.51 GiB block-size: physical: 4096 B
logical: 512 B sata: 3.1 speed: 6.0 Gb/s tech: HDD rpm: 5400
serial: <filter> fw-rev: SDM1 temp: 26 C scheme: GPT
SMART: yes state: enabled health: PASSED on: 367d 2h cycles: 2327
read: 2.87 TiB written: 3.95 TiB Pre-Fail: attribute: Spin_Retry_Count
value: 100 worst: 100 threshold: 97
Partition:
ID-1: / raw-size: 931.01 GiB size: 931.01 GiB (100.00%)
used: 855.36 GiB (91.9%) fs: btrfs block-size: 4096 B dev: /dev/nvme0n1p2
maj-min: 259:2
ID-2: /boot/efi raw-size: 512 MiB size: 511 MiB (99.80%)
used: 576 KiB (0.1%) fs: vfat block-size: 512 B dev: /dev/nvme0n1p1
maj-min: 259:1
ID-3: /home raw-size: 931.01 GiB size: 931.01 GiB (100.00%)
used: 855.36 GiB (91.9%) fs: btrfs block-size: 4096 B dev: /dev/nvme0n1p2
maj-min: 259:2
ID-4: /var/log raw-size: 931.01 GiB size: 931.01 GiB (100.00%)
used: 855.36 GiB (91.9%) fs: btrfs block-size: 4096 B dev: /dev/nvme0n1p2
maj-min: 259:2
ID-5: /var/tmp raw-size: 931.01 GiB size: 931.01 GiB (100.00%)
used: 855.36 GiB (91.9%) fs: btrfs block-size: 4096 B dev: /dev/nvme0n1p2
maj-min: 259:2
Swap:
Kernel: swappiness: 133 (default 60) cache-pressure: 100 (default) zswap: no
ID-1: swap-1 type: zram size: 31.27 GiB used: 20.2 MiB (0.1%)
priority: 100 comp: zstd avail: lzo,lzo-rle,lz4,lz4hc,842 max-streams: 16
dev: /dev/zram0
Sensors:
System Temperatures: cpu: 34.8 C mobo: N/A gpu: nvidia temp: 44 C
Fan Speeds (rpm): N/A gpu: nvidia fan: 0%
Info:
Processes: 440 Uptime: 15m wakeups: 0 Memory: total: 32 GiB
available: 31.27 GiB used: 6.21 GiB (19.9%) Init: systemd v: 254
default: graphical tool: systemctl Compilers: gcc: 13.2.1 alt: 12
clang: 16.0.6 Packages: pm: pacman pkgs: 2534 libs: 493
tools: octopi,paru,yay pm: flatpak pkgs: 0 Shell: garuda-inxi (sudo)
default: Bash v: 5.1.16 running-in: konsole inxi: 3.3.30
Garuda (2.6.16-1):
System install date:     2023-08-04
Last full system update: 2023-10-01
Is partially upgraded:   No
Relevant software:       snapper NetworkManager mkinitcpio nvidia-dkms
Windows dual boot:       Yes
Failed units:            nvidia-powerd.service

╭─jarnek@jarnek in ~ as 🧙 took 2m30s
╰─λ sudo btrfs sub list /
ID 256 gen 179125 top level 5 path restore_backup_restore_backup_@_202859658_213948730_backup_2023240720365717
9
ID 257 gen 179141 top level 5 path @home
ID 258 gen 179125 top level 5 path @root
ID 259 gen 125975 top level 5 path @srv
ID 260 gen 179138 top level 5 path @cache
ID 261 gen 179141 top level 5 path @log
ID 262 gen 179139 top level 5 path @tmp
ID 263 gen 179125 top level 283 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots
ID 264 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/1/snapshot
ID 265 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/2/snapshot
ID 266 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/3/snapshot
ID 267 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/4/snapshot
ID 268 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/5/snapshot
ID 269 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/6/snapshot
ID 270 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/7/snapshot
ID 271 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/8/snapshot
ID 272 gen 152295 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/9/snapshot
ID 273 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/10/snapshot
ID 274 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/11/snapshot
ID 275 gen 127433 top level 263 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944
376/.snapshots/12/snapshot
ID 276 gen 170927 top level 5 path @
ID 277 gen 179141 top level 5 path restore_backup_@_202859658
ID 278 gen 127433 top level 5 path restore_backup_restore_backup_@_202859658_213948730_backup_2023240721071801
0
ID 279 gen 127433 top level 5 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_21094437
6_backup_20232407212337461_testing_this_shit
ID 280 gen 127433 top level 5 path restore_backup_restore_backup_@_202859658_213948730
ID 281 gen 127433 top level 5 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_21094437
6_backup_20232407212712363
ID 282 gen 152298 top level 5 path restore_backup_restore_backup_restore_backup_restore_backup_@_202859658_213
948730_210944376_183107159
ID 283 gen 152296 top level 5 path restore_backup_restore_backup_restore_backup_@_202859658_213948730_21094437
6
╭─jarnek@jarnek in ~ as 🧙 took 9ms
╰─λ sudo findmnt --real
TARGET SOURCE   FSTYPE OPTIONS
/      /dev/nvme0n1p2[/restore_backup_@_202859658]
btrfs  rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=277,sub
├─/run/user/1000/doc
│      portal   fuse.p rw,nosuid,nodev,relatime,user_id=1000,group_id=1001
├─/home
│      /dev/nvme0n1p2[/@home]
│               btrfs  rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=257,sub
├─/root
│      /dev/nvme0n1p2[/@root]
│               btrfs  rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=258,sub
├─/srv /dev/nvme0n1p2[/@srv]
│               btrfs  rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=259,sub
├─/var/cache
│      /dev/nvme0n1p2[/@cache]
│               btrfs  rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=260,sub
├─/var/log
│      /dev/nvme0n1p2[/@log]
│               btrfs  rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=261,sub
├─/var/tmp
│      /dev/nvme0n1p2[/@tmp]
│               btrfs  rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=262,sub
└─/boot/efi
/dev/nvme0n1p1
vfat   rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,err

Any help would be greatly appreciated

You may have overwritten your Grub configuration from the snapshot somehow. If you boot to the live ISO, mount the correct subvolumes in a chroot, reinstall Grub, and regenerate the Grub configuration file, that may get you back on track.

This process is detailed here:

If you do not explicitly mount @ at / (for example, if you use the chroot tool), make sure it is @ that gets mounted and not another subvolume before you begin.

Keep in mind, if this works you may be booting to a system that is very out of date. Be sure to update with the garuda-update script to avoid having to deal with package conflicts and other interventions.

This user doesn’t even have an @ anymore.

My advice would be to rename the snapshot you have been booting into for many months to @, update /etc/fstab, and run update-grub.
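
A quick way to see what the current fstab actually points at before touching anything (a sketch; the fstab excerpt below is made up for illustration — on the real system you would grep /etc/fstab directly):

```shell
# Hypothetical fstab excerpt standing in for the real /etc/fstab.
cat > /tmp/fstab.sample <<'EOF'
UUID=1b45e05e-bac7-4f7b-940e-4f41c6374d09 /     btrfs subvol=/@,defaults,noatime     0 0
UUID=1b45e05e-bac7-4f7b-940e-4f41c6374d09 /home btrfs subvol=/@home,defaults,noatime 0 0
EOF

# List the subvol= option of every entry; after the rename, these names
# must match the subvolumes that actually exist on disk.
grep -o 'subvol=[^, ]*' /tmp/fstab.sample
```

On the real system the one-liner is simply: grep -o 'subvol=[^, ]*' /etc/fstab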

It’s showing up a little further down than the other top-level subvolumes for some reason:

I see it now. Even still, I would rename the current one to @.

I don’t see any reason to boot into an ancient subvol just because it has the correct name.

Good point, that’s fair enough. That would avoid the whole issue of potentially dragging the system six months into the past.

What about the original error though?

Do you think it’s just a simple matter of setting up a new subvolume for .snapshots, or something else?

No, it is because it is in the wrong place.

I would fix the subvol naming issue first.

Subvol 263 is .snapshots.

Once everything is named correctly, that subvol will need to be moved inside the other one.

Hey guys,

First of all, thank you for your responses.
Could you explain how I can proceed from here? I am pretty new to btrfs in general.

It’s up to you how you want to proceed. You can restore the original subvolumes by following these instructions:

A benefit of this method is it is simple and well-documented.

A drawback is you will restore the original root subvolume, which sounds like it may be out of date. You will lose any changes that have been made to the root subvolume in the meanwhile (packages added or removed, config files changed, etc).

With this second point in mind, another path forward is suggested here:

If you are not sure how to do this, here is one possible way:

Create a mount point outside the top-level subvolumes. Mount subvolid=0 at this mount point, then move to the new directory.

sudo mkdir /mnt/top-level_subvolume
sudo mount -o subvolid=0 /dev/nvme0n1p2 /mnt/top-level_subvolume
cd /mnt/top-level_subvolume

Rename the old @ subvolume (so you can reuse the name).

sudo mv @ @_old

Rename the subvolume you are booted into, so now it will be @.

sudo mv restore_backup_@_202859658 @

Next you will need to fix your fstab file. The easiest way I can think of is to grab the “original” one and paste it into your new root subvolume.

First move the existing file to backup.

sudo mv /etc/fstab /etc/fstab.bak

Then copy over the “good” fstab:

sudo cp @_old/etc/fstab @/etc/fstab

Now re-mount with the new fstab.

sudo systemctl daemon-reload
sudo mount -a

If you get any errors when re-mounting, you should stop and fix the problem before continuing on. If fstab is wrong you may not be able to boot.
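
If you want an extra sanity check before rebooting, findmnt (part of util-linux) can lint a mount table without mounting anything — assuming your util-linux is recent enough to have the --verify flag:

```shell
# findmnt --verify parses a mount table and reports problems (bad options,
# missing mount points, unknown filesystems) without mounting anything.
# Shown here on a throwaway sample file so it is safe to run anywhere.
printf 'tmpfs /tmp tmpfs defaults 0 0\n' > /tmp/fstab.sample

findmnt --verify --tab-file /tmp/fstab.sample
```

On the live system, plain `sudo findmnt --verify` checks /etc/fstab itself; `findmnt --verify --tab-file /etc/fstab.bak` would check the backup copy.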

When you are ready, regenerate the Grub configuration file.

sudo update-grub

Afterward, you should be able to reboot.

Before you do, you may as well get this out of the way:

You can mv this subvolume over from where you are, outside the top-level subvolumes.

sudo mv restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944376/.snapshots @/.snapshots

Once everything is up and working, you can easily delete unneeded extra subvolumes with Btrfs Assistant.

I hope that helps, let us know how it goes.

First you should check if there is an issue here or not. It is probably already correct.

Since .snapshots is a subvolume that contains snapshots, this should be done with a mv as well, not a snapshot.

Are you sure you can remount the root filesystem this way?

Isn’t it as simple as this:

sudo mount -o subvolid=0 /dev/nvme0n1p2 /mnt
sudo mv /mnt/@ /mnt/old@
sudo mv /mnt/restore_backup_@_202859658 /mnt/@
sudo rmdir /mnt/@/.snapshots
sudo mv /mnt/restore_backup_restore_backup_restore_backup_@_202859658_213948730_210944376/.snapshots /mnt/@/.snapshots

Boot off ISO, chroot in and run:

update-grub

I do not think it is already correct, see the mount options in use:

I think copying over the old fstab (assuming it is good) will be way easier than editing this one with the correct mount options.

That is a good suggestion, thanks! I edited the post above, this is a better idea for a number of reasons.

Yes. :grin:

Don’t forget, with this method a different root filesystem is not being mounted. It is the same root, only renamed.

I am not sure how this is simpler in any way, but it does effectively illustrate how there are multiple ways this could be accomplished.

Booting off the ISO and chrooting in seems like an unnecessary complication. It should work just fine from where he is if he fixes the mounts first, don’t you think so?

Thank you a lot, it worked like a charm.
I used the second method and it looks like everything is back to normal :smiley:

The only thing that kind of broke in between was Grub, where I had to boot from the Grub rescue command line. I am currently updating with

garuda-update

and will see if that fixes it.
Apart from that, my problem is solved.

The subvol for the root comes from the kernel options in Grub. That overrides /etc/fstab.

I suspect this is not needed at all, but it is easy to check by looking at /etc/fstab.
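
For the record, you can read exactly which subvolume Grub handed to the kernel straight out of /proc/cmdline. A small sketch, using the cmdline from the inxi output above as a sample string (on the live system you would read /proc/cmdline instead):

```shell
# Sample kernel command line (taken from the inxi output in this thread);
# on the running system: cmdline=$(cat /proc/cmdline)
cmdline='BOOT_IMAGE=/restore_backup_@_202859658/boot/vmlinuz-linux-zen rw rootflags=subvol=restore_backup_@_202859658 quiet'

# Extract the subvolume named in rootflags=; this is what overrides fstab.
subvol=$(printf '%s\n' "$cmdline" | grep -o 'rootflags=subvol=[^ ]*' | cut -d= -f3)
echo "$subvol"
```

For the sample above this prints restore_backup_@_202859658 — i.e. the snapshot subvolume, confirming it is the kernel options (not fstab) selecting the root.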

No, it has the same name. However, it is a different subvolume. Since btrfs doesn’t care about the names internally, this is basically the same as remounting the root on a different subvolume.

EDIT: Actually…in this case, it is the same subvol. So you probably don’t even need to remount it at all. Renaming it is probably all you need.

Only if you can successfully remount the root on a running system. Otherwise grub will pull in the wrong subvol which is what caused the issue in the first place.

EDIT: Since we are just renaming it, it will probably work fine to run update-grub without the chroot.
