Thanks for this, I've been experiencing some freezes too.
When an application freezes (defunct), I try to kill the process, but then the terminal freezes as well,
resulting in a cold reboot.
But I will check it out with another kernel.
By the way, you forgot the - between linux-lts and headers.
Had the same problem; I applied the fix provided by @ServoGamer and it worked really well. I used to open a single Chromium tab and the complete system would enter a disk-sleep state. Now, under moderate load, like opening 5+ tabs at the same time, there is a ~1 second freeze, but nothing that feels as ugly as before.
It's nice to share a solution and confirm that it works, but I am not a fan of copy-pasting a solution without background knowledge of what it actually does.
So after reading up on the wiki and the manual, here is what I assume:
My default commit interval is 30 seconds, since commit= is not defined in my /etc/fstab.
Your suggested fix, commit=15 0 2, would halve the interval of the periodic transaction commit, i.e. how often data is synchronized to permanent storage.
The 0 2 are the <dump> and <pass> fields from /etc/fstab?
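For context, here is roughly how such an fstab entry is laid out (the UUID and the other mount options below are placeholders, not taken from any real system):

```
# <device>      <mountpoint> <type>  <options>                    <dump> <pass>
UUID=xxxx-xxxx  /            btrfs   subvol=@,noatime,commit=15   0      2
```

The trailing 0 2 are indeed the <dump> and <pass> fields; commit=15 itself goes inside the comma-separated options field.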
If you could elaborate on why you chose these options and why they can help, that would be awesome for a newbie like me to feel safer handling my Garuda Linux.
There is no real reason to fear editing fstab. Simply make sure to make a backup before editing, which can be restored from the terminal if you mess things up.
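For example (the backup filename here is just an illustration):

```
# make a backup copy before editing (needs root)
sudo cp /etc/fstab /etc/fstab.bak
# if the edit breaks something, restore it from a terminal/TTY with:
sudo cp /etc/fstab.bak /etc/fstab
```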
Here are some good btrfs informational links:
If you are nervous about changing your commit interval via fstab, you can test an alternate method. Something similar can be accomplished with sysctl: change how quickly dirty data is written back from every 30 seconds to every 5 seconds by setting /proc/sys/vm/dirty_expire_centisecs to 500 (5 seconds) from the default of 3000 (30 seconds).
Use su to log in as root (this change is only temporary):
echo 500 > /proc/sys/vm/dirty_expire_centisecs
You can check the /proc/sys/vm/dirty_expire_centisecs setting with:
cat /proc/sys/vm/dirty_expire_centisecs
If this improves your performance, the setting can be made permanent with a conf file in the /etc/sysctl.d/ directory.
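A drop-in file along these lines should do it (the filename is arbitrary, as long as it ends in .conf):

```
# /etc/sysctl.d/99-dirty-expire.conf
# write back dirty data after 5 seconds instead of the default 30
vm.dirty_expire_centisecs = 500
```

It can be applied without rebooting by running sudo sysctl --system.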
Alongside all the other fixes I'd tested, I had forgotten to mention this step, which I applied today and which helped immensely:
A day ago I followed the recommended procedure of restoring a timeshift snapshot from the grub boot menu, then did a GUI timeshift restore of my last snapshot once I had booted into my desktop (as recommended). Afterwards, shutdown and startup took much longer, and my system was extremely sluggish. Today I repeated what I thought had helped with this type of system slowdown once before.
I opened the timeshift GUI and deleted all my backup snapshots. Some snapshots required being deleted twice before they were removed from the menu. All my snapshots are manually created (not autosnapshots). Quotas are disabled both in the system and in timeshift. After all snapshots were removed, I performed a btrfs balance. I ran the balance twice; the first run can take a fair while, depending on how long it has been since the last one. The second balance (although redundant) I simply run to be extra sure everything is in order. This is likely an excessive use of balancing, and is probably neither recommended nor required.
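I didn't list the exact commands above; a commonly used invocation looks like this (the 50% usage filters are my own choice, not a recommendation — adjust to taste):

```
# rebalance data and metadata chunks that are less than 50% full (needs root)
sudo btrfs balance start -dusage=50 -musage=50 /
# check progress from another terminal
sudo btrfs balance status /
```

The usage filters keep the run much shorter than a full, unfiltered balance, which is rarely necessary.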
After the balancing is complete I reboot, and the change is noticeable immediately. The system shuts down in the normal amount of time, and the reboot time is normal as well. As soon as the system was fully started, a series of tests showed that system performance was back as well. I have performed this sequence several times now, and each time it has definitely improved things considerably. The first time, I performed this along with many other troubleshooting steps, so I really wasn't sure what improved matters.
In this case I'm fairly confident this was the key step in correcting the performance issues. I can't say whether this will help others, as I'm using on-demand snapshots with quotas disabled (not the standard timeshift default setup). What led me to this was reading numerous posts claiming that an excessive number of snapshots could lead to system slowdowns. In my case I only had 4-6 snapshots (hardly what I'd consider excessive). However, the difference was like night and day after the deletion and re-balancing steps.
Well, that's encouraging that it seems to have wider application than just to myself.
I don't know whether this is applicable to systems that have never used the system restore feature, or whether it simply helps after you've performed a restore. All I know is that my system was working fine, then I got a balky update after which I couldn't log in. I rolled the system back and it worked fine, but performance suffered.
After performing this procedure everything seemed to be back to normal. Updates afterwards were fine. I assume whatever caused the issue initially was quickly rectified.
Oh, and of course, be sure to make a fresh timeshift snapshot after you've wiped the old ones (after the reboot).
In my case I do periodic balancing fairly regularly. So I guess the question would be: how often is enough?
The answer would seem to vary greatly depending on the percentage of weekly file churn. Since I do not use my home directory for storing large data files such as HD movies, my amount of data churn is usually minimal.
The only large files stored on my system partitions are my system snapshots. When I delete my snaps, which I do whenever I reach 4 or 5 backups, I perform a balance.
I'm sure others must have far greater data churn than myself as I store nothing in my home directory (it is all symlinked).
Same here: 2 x 500 GB SSDs symlinked under home, as backup_store and data_store. I doubt my post added much to this, but evidence is evidence, so I submitted my results in the hope of helping.
This is starting to seem like a good option to implement.
What kind of time frame do you think might be suitable for this timer?
I'm guessing how often it would be needed would vary greatly depending on the size of the drive(s) and the percentage of data being flushed on a regular basis.
The only problem is that on systems prone to freezing, a balance operation is one of the commonly mentioned triggers. So the question would be: is it better to have a timer set to run infrequently at a low-usage time, such as midweek at 4 am? Or would it be less likely to trigger a freeze if run more often (since each run would take far less time to finish)?
If 24/7 uptime were presumed, then yes, that sounds sensible, but usage (like mileage) will obviously vary, and I can't envision a one-size-fits-all solution. (But hell, I'm a noob.)
My personal use is not 24/7, but per-day sessions. In my case, manual would seem better, IMHO.
Timers can of course be easily enabled and disabled. For those unfamiliar with systemd, an option to enable/disable a balance timer could probably be added to the Garuda Assistant app.
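For anyone who wants to experiment in the meantime, a minimal pair of units might look like this (the unit names, schedule, and usage filters are all hypothetical, picked to match the "midweek at 4 am" idea above):

```
# /etc/systemd/system/btrfs-balance.service
[Unit]
Description=Periodic filtered btrfs balance

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs balance start -dusage=50 -musage=50 /

# /etc/systemd/system/btrfs-balance.timer
[Unit]
Description=Run btrfs balance weekly at a low-usage time

[Timer]
OnCalendar=Wed *-*-* 04:00:00
Persistent=false

[Install]
WantedBy=timers.target
```

Enable it with sudo systemctl enable --now btrfs-balance.timer, and disable it just as easily with systemctl disable --now. Persistent=false means a missed run is simply skipped rather than fired immediately at the next boot, which seems safer on systems where a balance can trigger freezes.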