Failed to start Daemon that keeps amount of available memory

Hello,
I've recently had problems using KVM/QEMU, but I managed to solve the issue by following tbg's possible solution here: KVM/QEMU cgroup.controllers - No such file or directory
The solution actually worked, and I am now able to start VMs using QEMU. However, since I changed the value of systemd.unified_cgroup_hierarchy from 1 to 0, the memavaild.service daemon fails to start:

× memavaild.service - Daemon that keeps amount of available memory
Loaded: loaded (/usr/lib/systemd/system/memavaild.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2021-10-17 14:35:35 BST; 19min ago
Docs: man:memavaild(8)
https://github.com/hakavlad/memavaild
Process: 14935 ExecStart=/usr/bin/memavaild -c /etc/memavaild.conf (code=exited, status=1/FAILURE)
Main PID: 14935 (code=exited, status=1/FAILURE)

Oct 17 14:35:35 PC systemd[1]: memavaild.service: Scheduled restart job, restart counter is at 5.
Oct 17 14:35:35 PC systemd[1]: Stopped Daemon that keeps amount of available memory.
Oct 17 14:35:35 PC systemd[1]: memavaild.service: Start request repeated too quickly.
Oct 17 14:35:35 PC systemd[1]: memavaild.service: Failed with result 'exit-code'.
Oct 17 14:35:35 PC systemd[1]: Failed to start Daemon that keeps amount of available memory.

I have tried:

systemctl enable --now memavaild.service

and

systemctl restart memavaild.service

I had no success with either. Any idea how I could fix this?
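For reference, the status output above only shows the tail of the log. The full error can be pulled from the journal, or by running the daemon in the foreground with the same arguments systemd uses (taken from the ExecStart line above):

# Full log for the unit from the current boot:
journalctl -b -u memavaild.service

# Run the daemon in the foreground to see its error output directly:
sudo /usr/bin/memavaild -c /etc/memavaild.conf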

Welcome :slight_smile:
Post your inxi output, as the template asked; you have deleted it.



Hello.

Post your terminal/konsole input and output as text (no pictures) from:

inxi -Faz

Without it, you will not receive any help from the Garuda team, and your topic is likely to be closed without notice.

It looks like a prerequisite actually...


Does this mean there is no way I can use QEMU and memavaild at the same time?

From what I read there, I would say that you can't (at the moment), but I have no technical knowledge.
At the very least you could ask on the upstream site.

Garuda's default systemd.unified_cgroup_hierarchy=1 setting was fine in the past. I'm assuming this is a temporary bug, with some system component requiring a return to the older cgroups version. As the package causing the problem was never really identified it could take a while, but a fix should be applied upstream at some point.
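To verify which hierarchy is actually active after a reboot, a quick check that should work on any systemd system (a small sketch, nothing Garuda-specific):

# Show the filesystem type mounted at /sys/fs/cgroup:
stat -fc %T /sys/fs/cgroup
# "cgroup2fs" means the unified v2 hierarchy is active;
# "tmpfs" means the legacy/hybrid v1 layout is in use.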

I would simply revert the setting to "1" every week or so to see if a fix has come through; hopefully it won't take too long to arrive from upstream. The only way to speed that process along would be to identify the offending upgrade and file a bug report with the project upstream. A sketch of flipping the parameter back follows below.
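A minimal sketch of reverting the parameter, assuming GRUB is the bootloader as on a default Garuda install (the "..." stands for whatever options are already on the line):

# In /etc/default/grub, set the parameter back on the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="... systemd.unified_cgroup_hierarchy=1"

# Regenerate the GRUB config, then reboot:
sudo grub-mkconfig -o /boot/grub/grub.cfg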

Welcome to the Garuda forum. :wave:


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.