Btrfs-assistant not showing snapshots to restore to when booted into a snapshot

What have you changed? Why are you expecting a different result?

It looks like you still have subvolid values in your fstab so I guess I would expect the rest of your setup is the same as well.

1 Like

Sorry, I meant to take those out and was in the middle of doing that when you commented.

I removed them, ran mkinitcpio -P, regenerated the grub config, rebooted, made a new snapshot, and rebooted into it. New outputs:

IN A SNAPSHOT:

~$ sudo btrfs subvol list /
[sudo] password for user1: 
ERROR: not a btrfs filesystem: /
ERROR: can't access '/'
~$ cat /etc/fstab
# Static information about the filesystems.
# See fstab(5) for details.

# <file system> <dir> <type> <options> <dump> <pass>
# /dev/vda2
UUID=8421b3f7-7a99-4db1-818b-86762a56b09e	/         	btrfs     	rw,relatime,discard=async,space_cache=v2,subvol=/@	0 0

# /dev/vda1
UUID=0907-05CE      	/boot     	vfat      	rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro	0 2

# /dev/vda2
UUID=8421b3f7-7a99-4db1-818b-86762a56b09e	/.snapshots	btrfs     	rw,relatime,discard=async,space_cache=v2,subvol=/@.snapshots	0 0

# /dev/vda2
UUID=8421b3f7-7a99-4db1-818b-86762a56b09e	/home     	btrfs     	rw,relatime,discard=async,space_cache=v2,subvol=/@home	0 0

# /dev/vda2
UUID=8421b3f7-7a99-4db1-818b-86762a56b09e	/var/cache/pacman/pkg	btrfs     	rw,relatime,discard=async,space_cache=v2,subvol=/@pkg	0 0

# /dev/vda2
UUID=8421b3f7-7a99-4db1-818b-86762a56b09e	/var/log  	btrfs     	rw,relatime,discard=async,space_cache=v2,subvol=/@log	0 0
~$ findmnt --real
TARGET                SOURCE                  FSTYPE      OPTIONS
/run/user/1000/doc    portal                  fuse.portal rw,nosuid,nodev,relatime,user_id=1000,group_id=1000
/.snapshots           /dev/vda2[/@.snapshots] btrfs       rw,relatime,discard=async,space_cache=v2,subvolid=260,subvol=/@.snapshots
/home                 /dev/vda2[/@home]       btrfs       rw,relatime,discard=async,space_cache=v2,subvolid=257,subvol=/@home
/var/cache/pacman/pkg /dev/vda2[/@pkg]        btrfs       rw,relatime,discard=async,space_cache=v2,subvolid=259,subvol=/@pkg
/var/log              /dev/vda2[/@log]        btrfs       rw,relatime,discard=async,space_cache=v2,subvolid=258,subvol=/@log
/boot                 /dev/vda1               vfat        rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-r

Okay, what subvolume are you taking a snapshot of? Please describe specifically how you are taking a snapshot (the one you are booting into).



I use this command:

sudo snapper -c root create --description "enter a description here"

Here's my root config:

$ sudo cat /etc/snapper/configs/root 

# subvolume to snapshot
SUBVOLUME="/"

# filesystem type
FSTYPE="btrfs"


# btrfs qgroup for space aware cleanup algorithms
QGROUP=""


# fraction or absolute size of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"

# fraction or absolute size of the filesystems space that should be free
FREE_LIMIT="0.2"


# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS="wheel"

# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"


# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"


# run daily number cleanup
NUMBER_CLEANUP="yes"

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="50"
NUMBER_LIMIT_IMPORTANT="10"


# create hourly snapshots
TIMELINE_CREATE="yes"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"


# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"

# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"

Is it possible the problem stems from grub-btrfs using the overlayfs technique, so that booted snapshots do not load as btrfs filesystems?

If so, then how are people booting into GUI snapshots and using btrfs-assistant in snapshots to restore them?

Okay, I've taken a look and I'm thinking this output, although rather strange, may just be a consequence of being booted into the overlayfs:

I am able to reproduce this by booting into a snapshot and not immediately restoring the snapshot.

sudo btrfs subvolume list /
[sudo] password for jeremy:
ERROR: not a btrfs filesystem: /
ERROR: can't access '/'

Bear in mind, the overlayfs is kind of a weird environment to begin with; see the explanation from the grub-btrfs GitHub page:

Using overlayfs, the booted snapshot will behave like a live-cd in non-persistent mode.
The snapshot will not be modified, the system will be able to boot correctly, because a writeable folder will be included in the ram.
(no more problems due to /var not open for writing)

Any changes in this system thus started will be lost when the system is rebooted/shutdown.

Perhaps we can chalk that behavior up to overlayfs weirdness.
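For what it's worth, a quick way to confirm you are in this situation is to check what filesystem type is actually backing `/`. The sketch below reads it from /proc/self/mounts (equivalent to `findmnt -no FSTYPE /`):

```shell
# Sketch: print the filesystem type backing /. On a normal boot this
# prints "btrfs"; inside a snapshot booted via grub-btrfs-overlayfs it
# prints "overlay", which is exactly why the btrfs tools refuse to
# treat / as a btrfs filesystem.
awk '$2 == "/" { fs = $3 } END { print fs }' /proc/self/mounts
```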

I cannot reproduce the behavior you have with Btrfs Assistant not working. In my case, while booted into the snapshot in overlayfs mode (not restored), I have all the expected targets in the drop-down menu of the Browse/Restore tab.

Browsing and restoring snapshots works as expected.

Any thoughts on how to troubleshoot it? It's really good news knowing that others using grub-btrfs CAN use btrfs-assistant inside the snapshots and successfully restore that way.

For context:

  • I set up arch using archinstall defaults for btrfs (no encryption)
  • This is EXACTLY the step-by-step process I use to set up snapper:
  1. sudo pacman -S snapper inotify-tools grub-btrfs snap-pac --noconfirm

  2. yay -S btrfs-assistant --noconfirm

  3. add grub-btrfs-overlayfs to the end of the HOOKS=() array in /etc/mkinitcpio.conf.

  4. if systemd is mentioned in the HOOKS=() array in /etc/mkinitcpio.conf, then replace it with udev.

  5. sudo mkinitcpio -P

  6. sudo umount /.snapshots

  7. sudo rm -rf /.snapshots

  8. sudo snapper -c root create-config /

  9. sudo btrfs subvolume delete /.snapshots

  10. sudo mkdir /.snapshots

  11. sudo mount -a

  12. sudo chmod 750 /.snapshots

  13. in /etc/snapper/configs/root, add wheel to ALLOW_GROUPS=""

  14. sudo btrfs subvol set-default 256 /

  15. sudo systemctl enable --now grub-btrfsd

  16. sudo systemctl status grub-btrfsd

  17. sudo grub-mkconfig -o /boot/grub/grub.cfg

  18. sudo systemctl enable --now snapper-timeline.timer

  19. sudo systemctl enable --now snapper-cleanup.timer

  20. sudo snapper -c root create --description "first snapshot"
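The hook edits in steps 3–4 can be sanity-checked before regenerating the initramfs in step 5. A rough sketch, run here against a sample HOOKS line (illustrative, not necessarily the real one; on a live system you would read it from /etc/mkinitcpio.conf):

```shell
# Sketch: verify the mkinitcpio HOOKS line no longer contains the
# systemd hook and does contain grub-btrfs-overlayfs at the end.
hooks='HOOKS=(base udev autodetect modconf block filesystems keyboard fsck grub-btrfs-overlayfs)'
case "$hooks" in
  *systemd*)              echo "systemd hook still present" ;;
  *grub-btrfs-overlayfs*) echo "overlayfs hook in place" ;;
  *)                      echo "grub-btrfs-overlayfs hook missing" ;;
esac
```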

Ah, you must be Cheeto from the Arch forum then. :eyes:

I believe your issue is coming from this part of your process:

I'm not sure why you are doing that, but I am guessing you have your reasons.

Your setup places the snapshots in a top-level subvolume:

This deviates from the default nested subvolume layout used by Snapper (where, if the root is @, the snapshots would be @/.snapshots, not @.snapshots).

To use Btrfs Assistant with a custom subvolume layout like this, you need to announce the mapping you have chosen in /etc/btrfs-assistant.conf according to the notes in the configuration file:

# In this section you can manually specify the mapping between a subvol and its snapshot directory.
# This should only be needed if you aren't using the default nested subvols used by snapper.
#
# The format is <name> = "<snapshot subvol>,<source subvol>,<UUID>"
# All should be paths relative to the root of the btrfs volume, and the UUID is the UUID of the filesystem
# For example, a line might look like this:
# root = "@snapshots,@,48bee883-0eef-4332-9bc5-65f01295e470"
[Subvol-Mapping]

In your case, I think you would set it up like this:

root = "@.snapshots,@,8421b3f7-7a99-4db1-818b-86762a56b09e"
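If you want to double-check the UUID for that line, it is the filesystem UUID already visible in your fstab (you could also get it from `findmnt -no UUID /` or `sudo blkid` on a running system). A small sketch, using one of the fstab lines shown earlier in this thread as sample input:

```shell
# Sketch: extract the filesystem UUID from an fstab entry.
line='UUID=8421b3f7-7a99-4db1-818b-86762a56b09e / btrfs rw,relatime,subvol=/@ 0 0'
printf '%s\n' "$line" | sed -n 's/^UUID=\([^ ]*\).*/\1/p'
# prints: 8421b3f7-7a99-4db1-818b-86762a56b09e
```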

A few things to clarify.

  • The error you are getting when running sudo btrfs subvol list / when booted off a snapshot is normal. You can avoid it by running sudo btrfs subvol list /home instead.
  • Even though using non-standard snapshot locations like @.snapshots makes everything harder for everyone, it should work. The most recent version added code to autodetect this scenario so it shouldn't even require special config.
  • However, when using an overlay, you may need the entry in the config file. The problem is that it loses the ability to detect the root subvolume because it is hidden by the overlay.

That is likely the issue.
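Regarding the first bullet: when / is the overlay, any real btrfs mount point can serve as the reference. A sketch that picks one automatically (illustrative; it falls back to a message if the machine has no btrfs mounts at all):

```shell
# Sketch: find the first mounted btrfs filesystem to use as a reference
# point for `btrfs subvolume list` when / is an overlayfs.
mnt=$(awk '$3 == "btrfs" { print $2; exit }' /proc/self/mounts)
echo "first btrfs mount: ${mnt:-none found}"
# then, for example: sudo btrfs subvolume list "$mnt"
```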

Ah, you must be Cheeto from the Arch forum then. :eyes:
:face_with_open_eyes_and_hand_over_mouth:

Moving the snapshots folder comes from the arch wiki:
https://wiki.archlinux.org/title/snapper#Configuration_of_snapper_and_mount_point

They explain the reason as: "This will make all snapshots that snapper creates be stored outside of the @ subvolume, so that @ can easily be replaced anytime without losing the snapper snapshots."
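The wiki's rationale can be illustrated with plain directories standing in for subvolumes (a toy sketch only; a real rollback would use `btrfs subvolume` commands against the mounted top level):

```shell
# Toy sketch: because the snapshots live next to @ rather than inside
# it, @ can be swapped out without touching them.
top=$(mktemp -d)                 # stands in for the btrfs top level
mkdir "$top/@" "$top/@.snapshots"
touch "$top/@.snapshots/snap-1"  # a snapshot kept outside @
mv "$top/@" "$top/@-broken"      # retire the damaged root subvolume...
mkdir "$top/@"                   # ...and put a restored one in its place
ls "$top/@.snapshots"            # prints "snap-1": the snapshot survived
rm -rf "$top"
```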

But OMG IT WORKS NOW!! You are all lifesavers and superheroes! THANK YOU!!

I'm learning, and I really hope one day I can help people the way you all do. It really is incredible.

@dalto do you mean the most recent version of btrfs-assistant? It works great without manually changing the config, as long as I'm in my "real" system (i.e. not a snapshot). I'll just update the config until I learn that something has changed, because that's the key that makes it work with grub-btrfs-overlayfs.

Being noted in the Arch wiki as an idea doesn't make it any less non-default. It might make things easier if you are manually restoring snapshots, but if you are using a tool like btrfs-assistant to do it, it makes things a lot more complicated. Had you not done that, your setup instructions would be much simpler and it would have worked without needing a special config entry.

I understand. I didn't mean to imply that it was default, but rather that I set up arch using archinstall defaults.

Still learning, and I appreciate the feedback. Thank you again. Wonderful community!


Totally unrelated, but all this, plus some reviews I've seen, has got me interested in trying Garuda. However, at least the GNOME version will not even finish booting to the live environment in a VM (it fails to load the user or something and just keeps blinking the boot-up process text). That may be stated somewhere, that it's not made for VMs; I haven't really looked into it yet. But just letting you know: the product and the people seem great :+1::+1:.

1 Like

From Download Page

Recommendations

  • Dual booting Garuda Linux may lead to unexpected issues! Be aware that the other OS may change the EFI boot priorities on UEFI or overwrite the bootloader on BIOS systems.

  • Our distro is optimized for performance on real hardware. Installing in virtual machines is not recommended as it might result in a bad experience! (e.g. setup assistant not working)

Since you are learning and experimenting with Btrfs and subvolumes anyway: as an alternative to installing in a VM, consider testing Garuda by installing in subvolumes instead.

You can set up a fresh installation within the span of a few minutes, tinker around as much as you like, then delete the subvolumes when you are finished, and it's like they were never there. No need to add partitions to the disk or resize filesystems.

A rough outline of what this looks like is on the Garuda Wiki page:

Bear in mind that guide is written assuming the default subvolume layout of a Garuda install; obviously, adjust according to whatever the subvolume layout of the base system happens to be. The important thing is to rename the subvolumes so that the default subvolumes that Garuda will set up are not in use when the installer runs.

See also this lengthy topic for a more extreme example of this kind of setup:

The community is great! The forum is a lot of fun. Interestingly, unlike this topic most of the issues in the forum are related to Garuda Linux. :eyes: :rofl:

By the way:

The forum software is Discourse. As you mentioned, a lot of distros use Discourse for their forums (fora?). They all end up slightly different from each other due to the variety of settings available, including various plugins that can be added to enable extra features, etc.

The Garuda Linux forum, of course, has the very best plugins available. :wink:

1 Like

Actually, there are more requests about games or unsupported hardware or software, like the Btrfs Assistant.

  • *Unlike the Arch Forum, we don't leave these unanswered either, so that our moderators don't get bored here.

Because there are very rarely errors caused by Garuda Linux. :smiley:

* (However, this was planned differently three years ago.) :grin:

2 Likes

I will 100% be looking into having different installs on other subvolumes; that's VERY much up my alley! Thank you guys for the help and the links!

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.