Garuda Update change request


garuda-update's rate-mirrors line is

rate-mirrors --allow-root --save=$MIRRORLIST_TEMP arch --max-delay=21600 > /dev/null

Would it be possible to add --concurrency=16 to that so it speeds things up?


That would be nice. I second the motion.


Is it ok like that @TNE :slight_smile: ?


Thanks for the PR. I only have a work gitlab account. Guess I need to create one for home so I can do this myself in the future. That way everyone can git blame me when stuff breaks. :rofl:


The other tweak I made locally to improve performance was to increase parallel downloads in pacman.conf to 10.
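For reference, that tweak is the ParallelDownloads option in /etc/pacman.conf (10 is simply the value I settled on, not an official recommendation):

```ini
# /etc/pacman.conf
[options]
# Number of packages pacman downloads in parallel (supported since pacman 6.0).
ParallelDownloads = 10
```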

hi @SGS ,
concurrency is a generic option, so it should go before arch subcommand as follows:
rate-mirrors --allow-root --save=$MIRRORLIST_TEMP --concurrency=16 arch --max-delay=21600 > /dev/null

e.g. rate-mirrors --help shows the generic options, while rate-mirrors arch --help shows the arch-specific ones.


I suspect the concurrency and pacman changes would benefit everyone except those on sub-200 Mbps connections. I wonder if it would make more sense to include two (or three) buttons on the app launcher, e.g.

concurrency = 4
parallel = 2

concurrency = 8
parallel = 5

concurrency = 16
parallel = 10
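To illustrate, here is a minimal sketch of how those presets could be wired up. The tier names and the preset_flags helper are hypothetical, not anything garuda-update actually ships:

```shell
#!/bin/sh
# Sketch only: hypothetical preset table for a launcher button.
# preset_flags maps a speed tier to "<rate-mirrors concurrency> <pacman ParallelDownloads>".
preset_flags() {
  case "$1" in
    slow)   echo "4 2"   ;;
    medium) echo "8 5"   ;;
    fast)   echo "16 10" ;;
    *)      echo "unknown preset: $1" >&2; return 1 ;;
  esac
}

# Example: build the rate-mirrors invocation for the chosen tier (dry run, just prints it).
set -- $(preset_flags fast)
echo "rate-mirrors --allow-root --concurrency=$1 --save=\$MIRRORLIST_TEMP arch --max-delay=21600"
```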


Such things need to be tested first, or at least discussed and adapted. I'm pretty convinced that the vast majority of users are on slow connections, 50 Mbps or less.

I do not see any reason to bump it to 16 when 8 is already a good number of concurrent connections and safe for everyone.


That's why I didn't put it into production (no pkgver bump).


Here is my data for what it is worth.

time rate-mirrors --allow-root --concurrency=8 --disable-comments arch --max-delay=21600
Executed in   35.79 secs

time rate-mirrors --allow-root --concurrency=16 --disable-comments arch --max-delay=21600
Executed in   22.03 secs

This detail is very significant if changing the default is being considered.

In cases where the user’s connection is the bottleneck (not the mirrors), increasing concurrency may do more harm than good, because the results of the “dirty check” will be skewed. The fastest of sixteen mirrors is meaningless if all sixteen are being throttled.

This discussion seems relevant: rate-mirrors: Everyday-use client-side map-aware tool / Community Contributions / Arch Linux Forums

  1. “Dirty check”: 8 mirrors are tested concurrently. My assumption is that a fast mirror should be notable even in such conditions. Also the tool waits until a speed is ±stable for every mirror.

Hm, Xyne, the Reflector creator, writes in the above-mentioned thread:

I completely agree that parallel speed testing makes absolutely no sense if the user connection is the bottleneck.

I’ll change the default number of threads to 1 but leave the option for those with better connections.

I tend to agree with him. Even the results of the “dirty check” might be bogus if the available bandwidth is not sufficient which means that the pre-selected mirrors in that first step might not necessarily be the fastest.

It may be best to leave the default as-is, and let higher concurrency be an opt-in “hack” for those who benefit from generous internet bandwidth.


@SGS noted.

I agree with you, @BluishHumility.

@rinchen what about proposing to the rate-mirrors devs that they support a config file, so anyone can set whatever they want? That would be nice.





Wait, it’s possible for people to get 50 Mbps? I thought the max was 9 Mbps at 2 AM when everyone’s asleep…
Oh wait, I’m Australian, of course our internet is that slow :upside_down_face:


rate-mirrors will gain a config file.


This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.

A config file was not able to be added but command line parameters were. Also, the default concurrency was raised. Github comment.
