NVIDIA GPU undervolting on Garuda

Hello, does anyone know if Garuda has pre-installed packages or officially supported tools to undervolt NVIDIA GPUs? I know there is the GreenWithEnvy package, but if I'm not wrong, it can't be used for undervolting the GPU.

My garuda-inxi:

System:
Kernel: 6.11.7-arch1-1-znver3 arch: x86_64 bits: 64 compiler: gcc v: 14.2.1
clocksource: hpet avail: acpi_pm
parameters: BOOT_IMAGE=/@/boot/vmlinuz-linux-znver3
root=UUID=2ac16174-9475-4394-9710-2e9d5d0d7863 rw rootflags=subvol=@
quiet loglevel=3 amdgpu.ppfeaturemask=0xffffffff ibt=off
Desktop: KDE Plasma v: 6.2.3 tk: Qt v: N/A info: frameworks v: 6.8.0
wm: kwin_x11 vt: 2 dm: SDDM Distro: Garuda base: Arch Linux
Machine:
Type: Laptop System: MAIBENBEN product: MaiBook X series v: Standard
serial: <superuser required>
Mobo: MAIBENBEN model: X558 v: Standard serial: <superuser required>
part-nu: X558 uuid: <superuser required> UEFI: American Megatrends LLC.
v: N.1.50MBB01 date: 08/20/2022
Battery:
ID-1: BAT0 charge: 8.8 Wh (24.9%) condition: 35.3/46.7 Wh (75.6%)
volts: 10.6 min: 11.4 model: standard type: Li-ion serial: <filter>
status: discharging
CPU:
Info: model: AMD Ryzen 7 5800H with Radeon Graphics bits: 64 type: MT MCP
arch: Zen 3 gen: 3 level: v3 note: check built: 2021-22
process: TSMC n7 (7nm) family: 0x19 (25) model-id: 0x50 (80) stepping: 0
microcode: 0xA50000C
Topology: cpus: 1x dies: 1 clusters: 1 cores: 8 threads: 16 tpc: 2
smt: enabled cache: L1: 512 KiB desc: d-8x32 KiB; i-8x32 KiB L2: 4 MiB
desc: 8x512 KiB L3: 16 MiB desc: 1x16 MiB
Speed (MHz): avg: 3553 min/max: 400/4463 boost: enabled scaling:
driver: amd-pstate-epp governor: performance cores: 1: 3553 2: 3553 3: 3553
4: 3553 5: 3553 6: 3553 7: 3553 8: 3553 9: 3553 10: 3553 11: 3553 12: 3553
13: 3553 14: 3553 15: 3553 16: 3553 bogomips: 102248
Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm
Vulnerabilities: <filter>
Graphics:
Device-1: NVIDIA GA106M [GeForce RTX 3060 Mobile / Max-Q]
vendor: AIstone Global driver: nvidia v: 565.57.01
alternate: nouveau,nvidia_drm non-free: 550.xx+ status: current (as of
2024-09; EOL~2026-12-xx) arch: Ampere code: GAxxx process: TSMC n7 (7nm)
built: 2020-2023 pcie: gen: 1 speed: 2.5 GT/s lanes: 8 link-max: gen: 4
speed: 16 GT/s lanes: 16 ports: active: none empty: DP-1,HDMI-A-1,eDP-1
bus-ID: 01:00.0 chip-ID: 10de:2520 class-ID: 0300
Device-2: Advanced Micro Devices [AMD/ATI] Cezanne [Radeon Vega Series /
Radeon Mobile Series] vendor: AIstone Global driver: amdgpu v: kernel
arch: GCN-5 code: Vega process: GF 14nm built: 2017-20 pcie: gen: 3
speed: 8 GT/s lanes: 16 link-max: gen: 4 speed: 16 GT/s ports:
active: eDP-2 empty: none bus-ID: 06:00.0 chip-ID: 1002:1638
class-ID: 0300 temp: 39.0 C
Device-3: Chicony HD Webcam driver: uvcvideo type: USB rev: 2.0
speed: 480 Mb/s lanes: 1 mode: 2.0 bus-ID: 1-4:4 chip-ID: 04f2:b711
class-ID: fe01 serial: <filter>
Display: x11 server: X.Org v: 21.1.14 with: Xwayland v: 24.1.4
compositor: kwin_x11 driver: X: loaded: amdgpu,nvidia
unloaded: modesetting,nouveau alternate: fbdev,nv,vesa dri: radeonsi
gpu: amdgpu display-ID: :0 screens: 1
Screen-1: 0 s-res: 1920x1080 s-dpi: 96 s-size: 508x285mm (20.00x11.22")
s-diag: 582mm (22.93")
Monitor-1: eDP-2 mapped: eDP-1 model: BOE Display 0x090f built: 2020
res: 1920x1080 hz: 144 dpi: 142 gamma: 1.2 size: 344x194mm (13.54x7.64")
diag: 395mm (15.5") ratio: 16:9 modes: max: 1920x1080 min: 640x480
API: EGL v: 1.5 hw: drv: nvidia drv: amd radeonsi platforms: device: 0
drv: nvidia device: 2 drv: radeonsi device: 3 drv: swrast gbm: drv: nvidia
surfaceless: drv: nvidia x11: drv: radeonsi inactive: wayland,device-1
API: OpenGL v: 4.6.0 compat-v: 4.5 vendor: amd mesa v: 24.2.6-arch1.1
glx-v: 1.4 direct-render: yes renderer: AMD Radeon Graphics (radeonsi
renoir LLVM 18.1.8 DRM 3.59 6.11.7-arch1-1-znver3) device-ID: 1002:1638
memory: 500 MiB unified: no
API: Vulkan v: 1.3.295 layers: 8 device: 0 type: integrated-gpu name: AMD
Radeon Graphics (RADV RENOIR) driver: mesa radv v: 24.2.6-arch1.1
device-ID: 1002:1638 surfaces: xcb,xlib device: 1 type: discrete-gpu
name: NVIDIA GeForce RTX 3060 Laptop GPU driver: nvidia v: 565.57.01
device-ID: 10de:2520 surfaces: xcb,xlib device: 2 type: cpu name: llvmpipe
(LLVM 18.1.8 256 bits) driver: mesa llvmpipe v: 24.2.6-arch1.1 (LLVM
18.1.8) device-ID: 10005:0000 surfaces: xcb,xlib
Audio:
Device-1: NVIDIA GA106 High Definition Audio vendor: AIstone Global
driver: snd_hda_intel v: kernel pcie: gen: 1 speed: 2.5 GT/s lanes: 8
link-max: gen: 4 speed: 16 GT/s lanes: 16 bus-ID: 01:00.1
chip-ID: 10de:228e class-ID: 0403
Device-2: Advanced Micro Devices [AMD] ACP/ACP3X/ACP6x Audio Coprocessor
vendor: AIstone Global driver: N/A alternate: snd_pci_acp3x,
snd_rn_pci_acp3x, snd_pci_acp5x, snd_pci_acp6x, snd_acp_pci,
snd_rpl_pci_acp6x, snd_pci_ps, snd_sof_amd_renoir, snd_sof_amd_rembrandt,
snd_sof_amd_vangogh, snd_sof_amd_acp63 pcie: gen: 3 speed: 8 GT/s
lanes: 16 link-max: gen: 4 speed: 16 GT/s bus-ID: 06:00.5
chip-ID: 1022:15e2 class-ID: 0480
Device-3: Advanced Micro Devices [AMD] Family 17h/19h HD Audio
vendor: AIstone Global driver: snd_hda_intel v: kernel pcie: gen: 3
speed: 8 GT/s lanes: 16 link-max: gen: 4 speed: 16 GT/s bus-ID: 06:00.6
chip-ID: 1022:15e3 class-ID: 0403
API: ALSA v: k6.11.7-arch1-1-znver3 status: kernel-api with: aoss
type: oss-emulator tools: N/A
Server-1: PipeWire v: 1.2.6 status: active with: 1: pipewire-pulse
status: active 2: wireplumber status: active 3: pipewire-alsa type: plugin
4: pw-jack type: plugin tools: pactl,pw-cat,pw-cli,wpctl
Network:
Device-1: Realtek RTL8125 2.5GbE vendor: AIstone Global driver: r8169
v: kernel pcie: gen: 2 speed: 5 GT/s lanes: 1 port: e000 bus-ID: 02:00.0
chip-ID: 10ec:8125 class-ID: 0200
IF: enp2s0 state: down mac: <filter>
Device-2: MEDIATEK MT7921K Wi-Fi 6E 80MHz driver: mt7921e v: kernel pcie:
gen: 2 speed: 5 GT/s lanes: 1 bus-ID: 04:00.0 chip-ID: 14c3:0608
class-ID: 0280
IF: wlp4s0 state: up mac: <filter>
Info: services: NetworkManager, systemd-timesyncd, wpa_supplicant
Bluetooth:
Device-1: MediaTek Wireless_Device driver: btusb v: 0.8 type: USB rev: 2.1
speed: 480 Mb/s lanes: 1 mode: 2.0 bus-ID: 3-4:4 chip-ID: 0e8d:0608
class-ID: e001 serial: <filter>
Report: btmgmt ID: hci0 rfk-id: 0 state: up address: <filter> bt-v: 5.2
lmp-v: 11 status: discoverable: no pairing: no class-ID: 6c010c
Drives:
Local Storage: total: 1.4 TiB used: 299.05 GiB (20.9%)
SMART Message: Unable to run smartctl. Root privileges required.
ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: Intel model: SSDPEKNU512GZ
size: 476.94 GiB block-size: physical: 512 B logical: 512 B speed: 31.6 Gb/s
lanes: 4 tech: SSD serial: <filter> fw-rev: 002C temp: 33.9 C scheme: GPT
ID-2: /dev/nvme1n1 maj-min: 259:4 vendor: A-Data model: LEGEND 960
size: 953.87 GiB block-size: physical: 512 B logical: 512 B speed: 63.2 Gb/s
lanes: 4 tech: SSD serial: <filter> fw-rev: A231W74A temp: 36.9 C
scheme: GPT
Partition:
ID-1: / raw-size: 199.61 GiB size: 199.61 GiB (100.00%)
used: 21.47 GiB (10.8%) fs: btrfs dev: /dev/nvme1n1p5 maj-min: 259:9
ID-2: /boot/efi raw-size: 400 MiB size: 399.2 MiB (99.80%)
used: 584 KiB (0.1%) fs: vfat dev: /dev/nvme1n1p6 maj-min: 259:10
ID-3: /home raw-size: 632.86 GiB size: 632.86 GiB (100.00%)
used: 277.57 GiB (43.9%) fs: btrfs dev: /dev/nvme1n1p4 maj-min: 259:8
ID-4: /var/log raw-size: 199.61 GiB size: 199.61 GiB (100.00%)
used: 21.47 GiB (10.8%) fs: btrfs dev: /dev/nvme1n1p5 maj-min: 259:9
ID-5: /var/tmp raw-size: 199.61 GiB size: 199.61 GiB (100.00%)
used: 21.47 GiB (10.8%) fs: btrfs dev: /dev/nvme1n1p5 maj-min: 259:9
Swap:
Kernel: swappiness: 133 (default 60) cache-pressure: 100 (default) zswap: no
ID-1: swap-1 type: zram size: 15.03 GiB used: 0 KiB (0.0%) priority: 100
comp: zstd avail: lzo,lzo-rle,lz4,lz4hc,842 max-streams: 16 dev: /dev/zram0
ID-2: swap-2 type: partition size: 20.01 GiB used: 0 KiB (0.0%)
priority: -2 dev: /dev/nvme1n1p3 maj-min: 259:7
Sensors:
System Temperatures: cpu: 50.6 C mobo: 46.0 C gpu: amdgpu temp: 41.0 C
Fan Speeds (rpm): N/A
Info:
Memory: total: 16 GiB note: est. available: 15.03 GiB used: 5.01 GiB (33.3%)
Processes: 368 Power: uptime: 11m states: freeze,mem,disk suspend: s2idle
wakeups: 0 hibernate: platform avail: shutdown, reboot, suspend, test_resume
image: 5.96 GiB services: org_kde_powerdevil, power-profiles-daemon,
upowerd Init: systemd v: 256 default: graphical tool: systemctl
Packages: pm: pacman pkgs: 1500 libs: 450 tools: octopi,paru Compilers:
clang: 18.1.8 gcc: 14.2.1 Shell: garuda-inxi default: fish v: 3.7.1
running-in: konsole inxi: 3.3.36
Garuda (2.6.26-1):
System install date:     2024-11-05
Last full system update: 2024-11-12
Is partially upgraded:   No
Relevant software:       snapper NetworkManager dracut nvidia-dkms
Windows dual boot:       No/Undetected
Failed units:

I'm using a discrete NVIDIA GPU on my laptop, and I want to undervolt both the GPU and the CPU to make the laptop run cooler.

Check out the ideas in this Reddit thread:

https://reddit.com/r/linux_gaming/comments/1crljgi/is_there_an_easy_way_to_underclock_nvidia/


I checked this thread already. Unfortunately, it doesn't have any info about reducing GPU voltage.

But I have found this thread:

https://www.reddit.com/r/linux_gaming/comments/1fm17ea/undervolting_nvidia_gpu_in_2024/

And I think this advice from the thread is the most suitable for most cases:

Nvidia doesn't provide direct access to the voltage value, but voltage is still directly tied to the clock: the GPU will auto-adjust voltage based on a modifiable curve which binds the two values together (a higher clock requires more volts, a lower clock requires fewer). If you apply a positive offset to this clock-voltage curve, you force the GPU to use a lower-than-default voltage for a given clock value, which is effectively an undervolt.

I do this on my 3090 to dramatically lower temperatures for almost no performance loss. It's very easy to do with a Python script which will work in both X11 and Wayland sessions, but you need to install a library providing the bindings for the NVIDIA Management Library (NVML) API. On Arch Linux you can install them from the AUR: yay -S python-nvidia-ml-py.
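
Before changing any settings, it's worth confirming the bindings actually load. Here is a minimal read-only sketch (not part of the original advice; it assumes the NVIDIA card is NVML device index 0, and queries don't need root):

#!/usr/bin/env python
# Read-only sanity check for the NVML bindings; changes no settings.
from pynvml import *

nvmlInit()
device = nvmlDeviceGetHandleByIndex(0)  # assumption: the NVIDIA GPU is device 0
print("Device:", nvmlDeviceGetName(device))
print("Graphics clock (MHz):", nvmlDeviceGetClockInfo(device, NVML_CLOCK_GRAPHICS))
print("Temperature (C):", nvmlDeviceGetTemperature(device, NVML_TEMPERATURE_GPU))
nvmlShutdown()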

You can then run a simple Python script as root; mine looks like this:

#!/usr/bin/env python
from pynvml import *

nvmlInit()
device = nvmlDeviceGetHandleByIndex(0)
# Lock the GPU clock range (min 210 MHz, max 1695 MHz)
nvmlDeviceSetGpuLockedClocks(device, 210, 1695)
# Positive offset on the clock-voltage curve: the actual undervolt
nvmlDeviceSetGpcClkVfOffset(device, 255)
# Power limit in milliwatts (315 W); unrelated to the undervolt
nvmlDeviceSetPowerManagementLimit(device, 315000)
  • nvmlDeviceSetGpuLockedClocks sets minimum and maximum GPU clocks. I need this because my GPU runs at out-of-specification clock values by default, since it's one of those dumb OC edition cards. You can find valid clock values with nvidia-smi -q -d SUPPORTED_CLOCKS, but if you're happy with the maximum clock values of your GPU, you can omit this line.
  • nvmlDeviceSetGpcClkVfOffset offsets the curve; this is the actual undervolt. My GPU is stable at +255 MHz, but you have to find your own value (see the sketch after this list). To clarify again, this doesn't mean the card will run at a maximum of 1695 + 255 = 1950 MHz; it just means that, for example, at 1695 MHz it will use the voltage that it would've used at 1440 MHz before the offset.
  • nvmlDeviceSetPowerManagementLimit sets the power limit, which has nothing to do with undervolting and can be omitted. The GPU will throttle itself (reduce clocks) to stay within this value (in my case 315 W).
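
To find your own offset, one approach is a small helper script that applies a candidate offset and then just watches the card while you run a game or benchmark in another window. This is a sketch of my own, not part of the original advice (hypothetical script, same device-index-0 assumption, run as root):

#!/usr/bin/env python
# Hypothetical helper: apply the offset given on the command line, then
# print clocks/temperature/power so you can watch the card under load.
import sys
import time
from pynvml import *

offset_mhz = int(sys.argv[1])  # e.g. start at 50 and work upwards

nvmlInit()
device = nvmlDeviceGetHandleByIndex(0)  # assumption: the NVIDIA GPU is device 0
nvmlDeviceSetGpcClkVfOffset(device, offset_mhz)
print(f"Applied +{offset_mhz} MHz curve offset, monitoring (Ctrl+C to stop)...")
try:
    while True:
        clock = nvmlDeviceGetClockInfo(device, NVML_CLOCK_GRAPHICS)
        temp = nvmlDeviceGetTemperature(device, NVML_TEMPERATURE_GPU)
        power = nvmlDeviceGetPowerUsage(device) / 1000  # NVML reports milliwatts
        print(f"clock: {clock} MHz  temp: {temp} C  power: {power:.0f} W")
        time.sleep(2)
finally:
    nvmlShutdown()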

Once you find the correct values, you can run the script with a systemd service on boot:

[Unit]
Description=Undervolt the first available Nvidia GPU device

[Service]
Type=oneshot
ExecStart=/etc/systemd/system/%N

[Install]
WantedBy=graphical.target

Rename the Python script undervolt-nvidia-device and the service undervolt-nvidia-device.service, put them both in /etc/systemd/system, and make the script executable (chmod +x); then run systemctl daemon-reload and systemctl enable --now undervolt-nvidia-device.service.

If you don't like systemd, there are many other ways to automatically run a script as root. But please make sure that your GPU is stable first: run the Python script manually in your current session and test stability after every new offset before you have it run automatically. That way, if your session locks up, you can force a reboot and the GPU will go back to its default values.
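
If a candidate offset misbehaves but the session is still usable, there is also no need to reboot to get back to stock. Here is a minimal revert sketch under the same assumptions as above (device index 0, run as root; not part of the quoted advice):

#!/usr/bin/env python
# Revert to defaults without rebooting: zero the offset, unlock clocks,
# and restore the default power limit.
from pynvml import *

nvmlInit()
device = nvmlDeviceGetHandleByIndex(0)
nvmlDeviceSetGpcClkVfOffset(device, 0)  # remove the curve offset
nvmlDeviceResetGpuLockedClocks(device)  # restore default min/max clocks
nvmlDeviceSetPowerManagementLimit(device, nvmlDeviceGetPowerManagementDefaultLimit(device))
nvmlShutdown()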

Update:

Yes, it works and allows the user to undervolt the NVIDIA GPU. I can confirm this method works on X11.
