Been reading through others' long boot time threads, and I read somewhere that linux-modules-cleanup.service is only supposed to run when the kernel has been updated. Well, I can confirm that linux-modules-cleanup.service runs at every boot and takes between 1min 39s and 3min 40s (whenever I run systemd-analyze blame it's at the top of the list with those numbers so far).
The rest of the services then load relatively quickly, at about 50s and under. For example, here's my latest output:
Can anyone shed some light on this service and what I might be able to do to optimize its run time? I only have 3 kernels installed: linux-zen (default and currently running kernel), linux-lts, and linux (the last two for fallback purposes, JUST IN CASE). I do have some DKMS modules that load during boot, but I'm not sure if that plays into it.
Anyway, thanks in advance to anyone who can help or provide advice!
This service cleans up modules from old kernels. Garuda employs a nice service which makes sure that no reboot is needed after upgrading the kernel: it keeps the modules of the currently booted kernel available after system upgrades. The service you are looking at removes the backed-up modules of the old kernel.
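If you want to see for yourself what it runs and how long it took on a given boot, these two standard commands should do it (nothing Garuda-specific assumed here beyond the unit name):
# show the unit file, including the ExecStart= line pointing at the script
systemctl cat linux-modules-cleanup.service
# show the unit's log for the current boot, with start/finish timestamps
journalctl -b -u linux-modules-cleanup.service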
What kind of storage do you have?
1TB HDD. Unfortunately, it's an external drive connected via USB 3.1, so I understand there's a performance hit from that, but 1:39 - 3:40 seems excessive even for the performance hit…
For me it usually takes seconds, and it should only happen after a kernel upgrade rather than on every reboot. But I agree, that's indeed quite a heavy hit.
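If you want to quantify the drive's raw speed, a quick sequential-read timing is enough (replace the hypothetical /dev/sdX with your actual device; needs root):
# buffered sequential read benchmark of the external drive
sudo hdparm -t /dev/sdX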
Nope....consistently long run times during boot. Every time. It's helped since I switched over to the linux-xanmod-cacule kernel, but that just means it's processing a little faster.
Try systemd-analyze critical-chain, which gives the real bottleneck.
The clean-up should not normally block the rest of the boot.
If there is an issue, post a bug report upstream (the GitHub link is posted in a previous post).
❯ systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
graphical.target @1min 47.482s
└─multi-user.target @1min 47.482s
  └─smb.service @1min 32.292s +15.189s
    └─nmb.service @58.138s +34.150s
      └─network-online.target @57.866s
        └─NetworkManager-wait-online.service @51.567s +6.297s
          └─NetworkManager.service @46.257s +5.307s
            └─dbus.service @46.249s
              └─basic.target @46.237s
                └─sockets.target @46.237s
                  └─nordvpnd.socket @46.193s +43ms
                    └─sysinit.target @46.116s
                      └─systemd-update-done.service @46.061s +54ms
                        └─ldconfig.service @38.158s +7.901s
                          └─local-fs.target @38.156s
                            └─boot-efi.mount @38.065s +91ms
                              └─systemd-fsck@dev-disk-by\x2duuid-1562\x2d3E70.service @37.223s +840ms
                                └─dev-disk-by\x2duuid-1562\x2d3E70.device @37.221s
You can see in the blame output that linux-modules-cleanup.service (which runs during EVERY boot, even when there aren't kernel updates - is there a way to find out why?) took 2min 33.353s to complete. I understand you're saying that even though it took that long to complete, the system isn't waiting for it before executing the rest of the chain.
However, you'll notice the basic systemd-analyze output indicates 3min 19.623s of userspace time out of a 3min 57.795s total (the total being firmware + loader + kernel + userspace), which concurs with my manual stopwatch timing of when sddm's login screen rendered and became user-accessible.
critical-chain indicates that graphical.target was reached at 1min 47.482s, but that doesn't match my IRL stopwatch timing.
Not really sure where to go from here.....and not even sure what bug I'd be submitting to the GitHub repo...
EDIT: I'm realizing now that, of the total userspace time, 1min 47.482s was the graphical.target portion.....I think....
I don’t think the system understands its role the same way as humans…
It receives an order, to find a target (graphical.target, in this case).
It does the job in (the shortest) time
Your stopwatch needs syncing to systemd
Joking aside, if you want knowledge, you have to seek it out.
Start from this:
systemctl cat display-manager.service
systemctl cat linux-modules-cleanup.service
systemctl status display-manager.service
man systemd.unit
Get into the journal and find when your graphical.target is reached and when the display manager starts. Then find what's in between them (if any).
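For example, something like this should do it (I'm assuming sddm as the display manager since you mentioned it; journalctl --grep needs a reasonably recent systemd):
# when graphical.target was reached this boot
journalctl -b --grep='Reached target Graphical Interface'
# when the display manager started this boot
journalctl -b -u sddm.service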
linux-modules-cleanup.service starts on every boot, by design, and runs a script.
Study the script (it's just terminal commands that you may already know, or that are useful to know) and experiment, as sketched after this list:
Disable the service (linux-modules-cleanup.service) and reboot.
Get the systemd-analyze output again.
Run the script's commands one by one to see what's happening and whether there is any mysterious, useless delay.
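A minimal sketch of that experiment (needs root; nothing here is destructive, and you can re-enable the service at any time):
# keep the service from running at the next boot, then reboot
sudo systemctl disable linux-modules-cleanup.service
sudo systemctl reboot
# after the reboot, compare timings with and without the service
systemd-analyze
systemd-analyze blame | head
# put things back when you're done experimenting
sudo systemctl enable linux-modules-cleanup.service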
Thank you so much for this guidance! I'd consider myself an "advanced beginner" or "beginner intermediate" Linux user (enough to seriously get into trouble) - so I'm pretty sure I can follow along with your suggested approach here and see if I can come to any conclusions (or culprits).
Hopefully I'll have time to perform this forensic analysis before this thread closes and I can provide an update LOL.
Y'all..... I think I figured it out.... it just boils down to linux-modules-cleanup.service actually taking THAT long to run. From what I understand of the script, it enumerates through /usr/lib/modules/* and then rsyncs backups into /usr/lib/modules/.old/. As I mentioned in an earlier post, I'm running Garuda off a 1TB external HDD connected via USB 3.0, and it's just slow - I'm bottlenecked by this hardware for now. I currently have 4 kernels installed, along with their headers:
linux
linux-lts
linux-zen
linux-xanmod-cacule [current default kernel]
And it just takes THAT much time to enumerate through those paths and make rsync backups..... I'd be too nervous to disable this permanently, since it seems important to run, and I'm not sure I'd trust myself to remember to turn it back on....I'll have to read up more on how systemd works and see if I can have it execute on every 5th boot or something....
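For anyone curious, here's roughly what I understand the script to be doing - a rough reconstruction on my part, NOT the actual packaged script, with paths as described above:
#!/bin/bash
# back up each installed kernel's module tree into /usr/lib/modules/.old/,
# skipping the running kernel; on a slow USB HDD, rsyncing several
# multi-hundred-MB module trees is what eats the time
backup_dir=/usr/lib/modules/.old
mkdir -p "$backup_dir"
for moddir in /usr/lib/modules/*/; do
    kver=$(basename "$moddir")
    [ "$kver" = "$(uname -r)" ] && continue   # leave the running kernel alone
    rsync -a "$moddir" "$backup_dir/$kver/"
done
(And if I ever try the "every 5th boot" idea, it looks like sudo systemctl edit linux-modules-cleanup.service would let me add a drop-in with some Condition*= gate - but that's for another day....)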
It's not that important. 99% of Arch users don't have it. You'll have a pleasant life without it. You'll just need to reboot every time a kernel gets updated; otherwise, modules that weren't already loaded before the update will fail to load when needed.
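The classic failure mode without such a backup looks roughly like this after a kernel upgrade, before rebooting (vboxdrv is just a hypothetical example of a module you hadn't loaded yet):
uname -r               # still reports the old, running kernel version
ls /usr/lib/modules/   # the upgrade replaced that version's directory with the new one
sudo modprobe vboxdrv  # fails with "module not found" until you reboot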
I've been using Arch since 2012 and never cared about installing it.
This specific script's usefulness is questionable and theoretical.
Nobody has it installed except those who do.
And IMHO its logic doesn't quite fulfill the purpose it's supposed to serve.
But it's my personal opinion.
As loli said, we can live without it unless it's proven to fix a problem (which currently it isn't, apart from theories).