Having some issues with Snapper at the moment

So, I read the announcement, and what was written there sounded good. The problem is, it doesn't quite work for me.
Some problems I'm having:

  • Way too many snapshots get saved for some reason. I specified in the Snapper settings tab that I want to keep 10, but currently 32 are saved in my folder.
  • What I see in GRUB is really messy. Honestly, it's hard to navigate the GRUB menu since the fonts and everything are way too big, so I can't read enough to navigate properly. I don't know if I have to change the theme or something.
  • The worst thing, personally: I can't restore while I'm normally booted. That is really annoying, and it makes me wish for the old system back. If I know I messed something up, I can't just restore and reboot; I have to reboot into the snapshot and then reboot another time. Is that intended? And if so, why? It's such a waste of time.

If someone can assist me in changing or fixing some of these problems, I'd be very grateful.


I just noticed mine didn't seem to be getting cleared; I'm keeping an eye on it now after making a change.
Edit to add: I just checked after the latest cleanup and they are getting cleared.
Check your settings in the Garuda Assistant → Snapper - Tick “Show Settings Tab” → Snapper Settings → Set the number you’d like
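
As far as I know, the Assistant just writes these keys into /etc/snapper/configs/root, so from a terminal the equivalent should be something like this (10 is only an example value, matching what you said you want to keep):

# show the current limits for the root config
sudo snapper -c root get-config
# keep at most 10 numbered (pre/post) snapshots
sudo snapper -c root set-config NUMBER_LIMIT=10 NUMBER_LIMIT_IMPORTANT=10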

They're literally addressing this at the moment; the feature's in testing and will (I assume) be integrated soon.


Ahh, thanks. I hadn't seen that; I should've checked that board, I guess.




Yeah, I don't know what you mean by cleanup; I guess that doesn't happen for me?

You're showing root snapshots, but home settings.
Wait, you have the profile selected as home, but the config name is showing as root with the path as /.

Have you been trying to set up home snapshots? I messed mine up previously trying to modify root to include home rather than creating a separate profile.


No, the window was just bugged; it shows how many snapshots I wanted. Ignore the home thing.


Is the window bugged, or is the window displaying bugged configs?

What's the content of:

/etc/snapper/configs/root
/etc/snapper/configs/home
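
(Those files are owned by root, so you'll probably need sudo to read them; something along these lines should print both, assuming the home config exists at all:)

# dump both Snapper configs to the terminal
sudo cat /etc/snapper/configs/root /etc/snapper/configs/home
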
# subvolume to snapshot
SUBVOLUME="/"

# filesystem type
FSTYPE="btrfs"


# btrfs qgroup for space aware cleanup algorithms
QGROUP=""


# fraction or absolute size of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"

# fraction or absolute size of the filesystems space that should be free
FREE_LIMIT="0.2"


# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""

# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"


# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"


# run daily number cleanup
NUMBER_CLEANUP="yes"

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="5"
NUMBER_LIMIT_IMPORTANT="5"


# create hourly snapshots
TIMELINE_CREATE="yes"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="0"
TIMELINE_LIMIT_DAILY="5"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"


# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"

# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"


# subvolume to snapshot
SUBVOLUME="/"

# filesystem type
FSTYPE="btrfs"


# btrfs qgroup for space aware cleanup algorithms
QGROUP=""


# fraction or absolute size of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"

# fraction or absolute size of the filesystems space that should be free
FREE_LIMIT="0.2"


# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""

# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"


# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"


# run daily number cleanup
NUMBER_CLEANUP="yes"

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="5"
NUMBER_LIMIT_IMPORTANT="5"


# create hourly snapshots
TIMELINE_CREATE="yes"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="0"
TIMELINE_LIMIT_DAILY="5"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"


# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"

# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"


They're both / (root)


Huh? What do you mean?


There are two configs that you posted in that block; they're both for

SUBVOLUME="/"

Hourly snapshots aren't the default, but you have hourly snapshots, yet hourly snapshots aren't set in your config. Did you previously set hourlies? How many did you set, and when did you get rid of them?


Yeah, I pasted it twice; I don't have a home config. Or rather, it's a locked file for some reason.
I have never set hourlies.


Can we see the output of:

systemctl list-timers
systemctl status snapper-cleanup.timer
systemctl status snapper-cleanup.service

Here you go:

╭─benjamin@Rechenboy in ~ took 1ms
[🔴] × systemctl list-timers
NEXT                        LEFT       LAST                        PASSED            UNIT                         ACTIVATES
Tue 2021-11-30 18:00:00 CET 58min left Tue 2021-11-30 17:00:15 CET 1min 12s ago      snapper-timeline.timer       snapper-timeline.service
Wed 2021-12-01 00:00:00 CET 6h left    Tue 2021-11-23 10:40:04 CET 1 week 0 days ago btrfs-balance.timer          btrfs-balance.service
Wed 2021-12-01 00:00:00 CET 6h left    Tue 2021-11-23 10:40:04 CET 1 week 0 days ago btrfs-defrag.timer           btrfs-defrag.service
Wed 2021-12-01 00:00:00 CET 6h left    Tue 2021-11-23 10:40:04 CET 1 week 0 days ago btrfs-scrub.timer            btrfs-scrub.service
Wed 2021-12-01 00:00:00 CET 6h left    Tue 2021-11-23 10:40:04 CET 1 week 0 days ago btrfs-trim.timer             btrfs-trim.service
Wed 2021-12-01 00:00:00 CET 6h left    Tue 2021-11-30 09:36:28 CET 7h ago            logrotate.timer              logrotate.service
Wed 2021-12-01 00:00:00 CET 6h left    Tue 2021-11-30 09:36:28 CET 7h ago            man-db.timer                 man-db.service
Wed 2021-12-01 00:00:00 CET 6h left    Tue 2021-11-30 09:36:28 CET 7h ago            shadow.timer                 shadow.service
Wed 2021-12-01 01:51:16 CET 8h left    Mon 2021-11-29 10:50:57 CET 1 day 6h ago      snapper-cleanup.timer        snapper-cleanup.service
Wed 2021-12-01 01:55:45 CET 8h left    Mon 2021-11-29 10:55:25 CET 1 day 6h ago      systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Wed 2021-12-01 05:04:52 CET 12h left   Tue 2021-11-30 10:05:49 CET 6h ago            updatedb.timer               updatedb.service

11 timers listed.
Pass --all to see loaded but inactive timers, too.

╭─benjamin@Rechenboy in ~ took 7ms
╰─λ systemctl status snapper-cleanup.timer
● snapper-cleanup.timer - Daily Cleanup of Snapper Snapshots
Loaded: loaded (/usr/lib/systemd/system/snapper-cleanup.timer; enabled; vendor preset: disabled)
Active: active (waiting) since Mon 2021-11-29 10:40:25 CET; 1 day 6h ago
Trigger: Wed 2021-12-01 01:51:16 CET; 8h left
Triggers: ● snapper-cleanup.service
Docs: man:snapper(8)
man:snapper-configs(5)

Nov 29 10:40:25 Rechenboy systemd[1]: Started Daily Cleanup of Snapper Snapshots.

╭─benjamin@Rechenboy in ~ took 732ms
╰─λ systemctl status snapper-cleanup.service
○ snapper-cleanup.service - Daily Cleanup of Snapper Snapshots
Loaded: loaded (/usr/lib/systemd/system/snapper-cleanup.service; static)
Active: inactive (dead) since Mon 2021-11-29 10:51:07 CET; 1 day 6h ago
TriggeredBy: ● snapper-cleanup.timer
Docs: man:snapper(8)
man:snapper-configs(5)
Main PID: 15896 (code=exited, status=0/SUCCESS)
CPU: 50ms

Nov 29 10:50:57 Rechenboy systemd[1]: Started Daily Cleanup of Snapper Snapshots.
Nov 29 10:50:57 Rechenboy systemd-helper[15896]: running cleanup for 'root'.
Nov 29 10:50:57 Rechenboy systemd-helper[15896]: running number cleanup for 'root'.
Nov 29 10:50:57 Rechenboy systemd-helper[15896]: running timeline cleanup for 'root'.
Nov 29 10:50:57 Rechenboy systemd-helper[15896]: running empty-pre-post cleanup for 'root'.
Nov 29 10:51:07 Rechenboy systemd[1]: snapper-cleanup.service: Deactivated successfully.


It looks normal to me. Snapper cleanup only runs so often, and it saved 5 timeline snapshots the last time it ran, which is correct.

All those hourly snapshots will get deleted next time it runs.

If you only want snapshots when pacman runs, you can uncheck the "Enable timeline snapshots" checkbox.

Alternatively, you could modify the cleanup timer to run more often, which would remove the hourly snapshots much more quickly.
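
For example (just a sketch; the 2-hour interval is my own arbitrary pick, not a Garuda default), disabling timeline snapshots for the root config from a terminal would look like:

# stop creating hourly timeline snapshots; pre/post (pacman) snapshots are unaffected
sudo snapper -c root set-config TIMELINE_CREATE=no

and making the cleanup run more often would be a drop-in created with sudo systemctl edit snapper-cleanup.timer, containing something like:

[Timer]
# clear the shipped interval, then run the cleanup every 2 hours instead
OnUnitActiveSec=
OnUnitActiveSec=2h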


But why, then, do I have 20+ timeline snapshots? Some of them seem to be hourly.

Because they are taken hourly and the cleanup job last ran 1 day and 6 hours ago.


But I don't have hourly snapshots enabled?

Snapper always takes hourly snapshots when timeline snapshots are enabled. Setting hourly to 0 just means that none of them are kept when the cleanup job runs.

That is why there are two ways to lessen your snapshot volume: run the cleanup job more often, or disable timeline snapshots.
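
If you want the existing backlog gone right away instead of waiting for the timer, you should (I believe) be able to trigger the cleanup by hand:

# run the timeline and number cleanup algorithms immediately for the root config
sudo snapper -c root cleanup timeline
sudo snapper -c root cleanup number
# list what is left afterwards
sudo snapper -c root list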


But isn't that kind of dumb? That would mean I can't have 5 snapshots from the past 5 days, which has helped me in the past. Why would it work like that?
