Interested in talking about the btrfs file system

Then why, when I install Garuda on an SSD, does it always enable autodefrag?

1 Like

The topic of autodefrag on SSDs with BTRFS is all over the place. Someone pointed out that heavily fragmented files can really cause issues, so I guess the point is ...it depends on the situation.

1 Like

Why? Give reasons. There are many reasons for using autodefrag, and it doesn't break snapshots or symlinks either. Even on SSDs there are benefits, see Gotchas - btrfs Wiki.

2 Likes

I must have missed them somewhere in the haze of trying to wrap my head around BTRFS. When did filesystems become so complicated? :wink:

1 Like

I believe it was shortly after some idiot thought they were being smart by inventing "writing".
If only we'd stuck with our God-given binary vocal language of Ugh & Urg!

1 Like

Right? BTRFS makes my anxiety flutter. I don't know enough about it to be in my comfort zone.....and my trust level is in the orange. Then someone brings up fragmentation. :exploding_head:

1 Like

This mount option greatly increases the number of writes, which is very harmful to an SSD. I accidentally forgot to disable this option on one of my servers, and a huge amount of writing occurred in a couple of days (this server is part of a distributed file store, where mainly database backups are kept).
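For anyone worried about this, the drive's own SMART counters are one way to check how much has actually been written over its lifetime (a sketch assuming smartmontools is installed; device paths and attribute names vary by vendor and interface):

```shell
# Lifetime writes as reported by the drive itself.
# NVMe drives report "Data Units Written":
sudo smartctl -A /dev/nvme0n1 | grep -i "data units written"

# Many SATA SSDs expose a vendor-specific attribute instead:
sudo smartctl -A /dev/sda | grep -i "total_lbas_written"
```

Checking this before and after a few days with autodefrag enabled would show the real impact on your workload.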

I read that any harm to SSDs by using the autodefrag mount option may mainly occur in large databases. That would explain your conundrum.

I have spent the past hour reading manpages and wikis trying to determine whether or not to keep using the autodefrag mount option with SSDs (in BTRFS). The answer so far as I can determine at this point is "dunno."

1 Like

Add to that confusion: you shouldn't defrag an SSD, because SSDs are most sensitive to the number of write cycles, so you want to reduce those cycles. Defragmentation writes a lot of data to the disk when moving files around to accommodate the process. Also, the warranty on most SSDs ends once the drive has exceeded its warrantied number of write cycles; defragmentation only adds to those cycles and forces you out of warranty coverage for no reason. blah...blah....head spin.

On top of that, SSDs fragment their stored data on their own; it's just the way they write. So defragging one is counter-intuitive and unnecessary. :man_shrugging:

1 Like

Doesn't fstrim take care of it anyway? I mean, really, I can't find any "authority" that explains the preferred mount options for BTRFS on an SSD for /root, including whether to use autodefrag or not. Much of the available material is somewhat dated (circa 2013).

2 Likes

From what I read, fstrim does take care of it, if it is set up properly (meaning size, vague) during install? So I guess that would fall on Garuda's installer?? This is the part that confuses me.

2 Likes

Effing BTRFS, bruddah. Ext 3/4 wasn't complicated enough. I guess. I remember when ReiserFS was the next new thang. Until Hans did what he did. :wink:

3 Likes

Damn, I was hoping @dalto was gonna explain this, as they seem to have more concrete knowledge.

From what I've gleaned over the years:

  • For typical desktop use, it's unlikely you'll write enough data to a crappy (and/or QLC) drive to wear it out before it becomes obsolete due to small size / slow speed.
  • Frequent OS defragging will wear it out (much) faster with little benefit, due to how the controller shuffles data around to minimize copy/erase/write cycles on NAND.
  • The more available space left on a drive, the longer it'll last, as it allows the drive to spread writes out across the drive before it has to copy/erase/write to an occupied NAND cell.
  • TRIM simply informs the controller that data is no longer "valid" so it can overwrite it as and when needed. It doesn't defrag the drive, though the drive may then reshuffle data if NAND cells have been freed up.
  • The "heavily fragmented files" issue should only occur if a high percentage of the capacity is used and/or a lot of small writes are occurring. If there's plenty of free space, the drive's controller should write those files to unwritten NAND cells, and eventually the cells that had held the old data will be freed up and available to be written to again.

There's A LOT of complexity to it. The takeaway is: if you've "invested" in a decent SSD and you keep 10-25% free, the drive should last AGES (for typical desktop use), and if you format the drive once every year or two for a fresh OS install, it'll do even better, as it gets a clean slate (sort of, 'cos you do write a lot of data during an install that would otherwise have remained untouched).
Off the top of my head, lasting 10 years would require something like 50GB of writes per day on a 500GB drive while maintaining 10% free space, or something crazy like that (VERY ballpark figures).
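If you'd rather see TRIM happen than wait for a timer, util-linux's fstrim can be run as a one-off (assuming your btrfs root is mounted at /; needs root):

```shell
# Discard all unused blocks on the filesystem mounted at /,
# and print how much was trimmed:
sudo fstrim -v /
```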

4 Likes

So, you didn't say: should I use autodefrag as a mount option for /root on a small (128 GB) SSD with BTRFS? It appears, from what you've stated, that the answer is no.

I appreciate your gleanings. It's obvious you've invested some time in doing so, and it makes using BTRFS a less uncomfortable fit. :slight_smile:

regards

2 Likes

Your deduction was accurate.

I'm a nerd who's enjoyed (PC) tech for going on 3 decades. I've picked up some stuff, but whilst I often appear like an expert, I'm not, as I've put little of the knowledge into practice for it to translate into experience and then wisdom.

I'll mention too: check that the trim timer's set to run.

sudo systemctl status btrfs-trim.timer

FWIW, I had a 120GB SSD (the cheaper of the two HyperX models, 3K P/E cycles vs 5K) for almost 10 years. The internals were still absolutely fine, and it had been my primary drive the whole time, but one of the pins on the data line broke off this year. Too many physical system rebuilds over the years was the death of it.
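And if that status check shows the timer as inactive, enabling it is one command (the btrfs-trim.timer name assumes the btrfsmaintenance package is installed, as on Garuda; on other distros util-linux's fstrim.timer plays the same role):

```shell
# Start the periodic trim timer now, and at every boot:
sudo systemctl enable --now btrfs-trim.timer
```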

4 Likes

Thanks for sharing your experience with SSDs. It's an area that only recently concerned me, along with BTRFS, and eff me, but if I want to delay further decay of the brain cells I can least afford to lose nearing age 70, I must keep learning. Damnit. :wink:

3 Likes

I only disabled autodefrag today (well, yesterday now). I'd only checked whether it was active because of this thread.
I think it's probably a worthy discussion, as it may be worth Garuda having a script that enables/disables it depending on whether the user has an HDD/SSD or is running (in)appropriate services.
Also, whether autodefrag should be the default on a "high performance" distro; though given some users are focused on gaming, where HDDs are still common ... :confused:
Of course, that depends how much effort they wanna put into serving uber newbs.

2 Likes

The autodefrag operation for btrfs is confusing because there is a lot of conflicting information on how it works. Also, please remember that defragmentation of a btrfs filesystem is not exactly the same as on a traditional filesystem, because of the way the data is stored.

Here is my understanding, which I don't have tremendous confidence in because of all the conflicting info out there.

  • Using the autodefrag mount option on an SSD can actually improve read performance, because it moves data within blocks (and not just entire blocks of data). This reduces the number of blocks that have to be read.
  • Using this option can cause reflink breakage, which increases the space used by snapshots (there is a lot of conflicting information on how much real-world impact this has).
  • There are many claims that the mount option also queues writes differently, writing the data in a less fragmented way to begin with. This is possible due to CoW. That being said, while I've seen numerous claims of this from varying sources, I haven't found any official documentation that describes it either way.

For reference, I choose not to use the option on SSDs on my personal machines.
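On the reflink-breakage point, one way to get a rough feel for it on your own system is the compsize tool (a separate package); as I understand its output, shared (reflinked) extents show up as the Referenced column being larger than Disk Usage:

```shell
# Summarize extent usage for everything under / (needs root).
# Referenced > Disk Usage suggests reflink/snapshot sharing:
sudo compsize /
```

Running it before and after a defrag would show roughly how much sharing was lost.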

4 Likes

So, question: I used 'fgrep btrfs /proc/mounts' and it shows I have autodefrag on all the partitions. How do I stop/remove autodefrag? By editing fstab?

Edit to add: never mind, I found the solution to shut it down; I'm just on the fence about whether I should or not. The Arch wiki recommends using it. :man_shrugging:
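For anyone else landing here with the same question, a sketch of the usual approach (paths are the standard ones, but check your own fstab):

```shell
# 1. See which filesystems currently mount with autodefrag:
findmnt -t btrfs -o TARGET,OPTIONS

# 2. Edit /etc/fstab and delete "autodefrag" from the options
#    column of each btrfs line, then apply without rebooting:
sudo mount -o remount /
```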

1 Like

This is a hard topic. I'd like to know how to better maintain my btrfs filesystem to optimize SSD operation and lifespan. I have much to learn, and this thread has been great for getting some leads.

Has anyone come up with new information or experience regarding defragmentation, fstrim, discard=async, balance, etc. for desktop/workstation/server workloads?

Regarding the comments on the complexity of Btrfs, I think it's worth it for the learning experience, possibilities, and features that come along with it. I'm happy I have the chance to take the time to set up my system better, so I'd rather deal with a CoW filesystem than stick with ext4.
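In case it helps others gather leads, here are a few of the maintenance operations mentioned in this thread as one-off commands (a sketch: the btrfsmaintenance package can schedule these on timers instead, and the balance filter value is just an example, not a recommendation):

```shell
# Verify checksums of all data and metadata (runs in background),
# then check progress:
sudo btrfs scrub start /
sudo btrfs scrub status /

# Compact data block groups that are less than 50% full,
# returning the freed space to the unallocated pool:
sudo btrfs balance start -dusage=50 /

# Compare allocated vs. used space to judge whether a balance helps:
sudo btrfs filesystem usage /
```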