I updated the system too, but I can't get my KVM virtual machines working. When I try to start a virtual machine, I get this error message:
➜ sudo virsh start deb01
[sudo] password for jpiau:
error: Failed to start domain 'deb01'
error: Unable to read from '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2ddeb01.scope/libvirt/cgroup.controllers': No such file or directory
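The missing file sits under the cgroup v2 (unified) hierarchy; as a quick diagnostic sketch, one can check what controllers the cgroup root actually exposes (path taken from the error message above):

```shell
# Show the controllers available at the root of the cgroup v2 hierarchy.
# If this file is missing, the system is not using the unified hierarchy at all.
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null \
    || echo "cgroup v2 unified hierarchy not mounted at /sys/fs/cgroup"
```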
I'm using the linux-lts kernel, but I tried linux-zen too and the error is the same.
I think this is a recent problem, because until about a month ago I could start the virtual machines without issues.
I've installed the virtual machines with virt-manager and the configuration files are like this:
By the way, we cannot fix every software bug in the Linux world.
Also, even if it were a Garuda Linux problem, we would still be missing your inxi -Faz output; it's quite tiring to have to ask for it over and over again.
I also think you must realize there are massive differences in kernel and package versions between the static distros you listed and a rolling distro.
I am also having the same problem, and I also have multiple VMs running on current Ubuntu and Debian distros. This is limited to Garuda and the patch process on or about Sept 26th. No fix found to date, and both old (existing) and new VMs fail to run. You can create them, but nothing will run from the GUI or CLI.
While I 100% understand that working on a rolling distro has its challenges, I have not seen something break like this via an update and take this long to work into the support threads and get addressed. This is an issue with Garuda and needs to be worked out within the community, even if someone who starts the process bails out.
How can you possibly make such a statement with any degree of certainty? This is pure speculation on your part. Always remember, whenever you point the finger at someone else, there are three fingers pointing back at you. Have you even checked the bug trackers on the related upstream projects (and Arch itself)? Have you tested numerous other kernels, including linux-mainline and linux-next-git? Have you reported fully on your attempted fixes (as detailed in the help request template)? We are simply left guessing as to which fixes you have attempted. The required information you have provided is woefully inadequate. In addition, your expectations of entitlement are way out of line for a free distro with a small dev team and an equally small group of forum support volunteers.
Neither of you has provided your system specs with an inxi -Faz output, as requested multiple times. This is explicitly spelled out as a requirement in the help request template, which also states explicitly that:
Without it, you will not receive any help from the Garuda team or your topic is likely to be closed without notice.
You ignore all the expectations on our forum, yet you expect the Garuda devs to come running to your rescue for software that has nothing to do with Garuda. Virtualization technologies are not our responsibility to maintain or bugfix. You should determine whether this is a problem with your related virtualization packages by downgrading all their components to see if this corrects your issue. If qemu or other related updates are at fault, then it is your responsibility to report this on the relevant project's upstream bug tracker.
Garuda expects users to perform their due diligence, and your performance has fallen far short in that department. Learn to do for yourselves before you expect others to do for you.
Sorry to be so blunt, but you need to put on your big boy boots and start doing more digging. There are posts on other forums describing errors similar to the ones you have received. Perhaps with a little more effort you can turn up something that may help.
The point I am making is that the update broke a functional component of the distro you are maintaining. I am not assigning any blame; it's an observation of the painfully obvious. The issue very much appears to be reproducible across multiple systems, all running Garuda, all failing with the same error after an update.
I am not asking for anyone to come running, nor for you to fix it; I am simply saying that closing the ticket seems like the exact opposite of what is needed.
It's broken on your distro on more than one PC, running in more than one environment. If you would like to engage to address it, let me know. If you simply want to stand on your soapbox, have at it.
Check your pacman log for the list of upgraded packages when the breakage occurred.
Selectively downgrade the packages affecting virtualization, one at a time.
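As a sketch of that first step: upgrades in /var/log/pacman.log are timestamped, so a grep on the suspected date narrows things down quickly. The sample log below is fabricated for illustration; on a real system, point the grep at /var/log/pacman.log instead.

```shell
# Fabricated sample in pacman.log format (a real log lives at /var/log/pacman.log)
cat > /tmp/pacman-sample.log <<'EOF'
[2021-09-27T09:00:01+0200] [ALPM] upgraded linux (5.14.7-1 -> 5.14.8-1)
[2021-09-27T09:00:02+0200] [ALPM] upgraded linux-zen (5.14.7-1 -> 5.14.8-1)
[2021-09-27T09:00:03+0200] [ALPM] upgraded openssh (8.7p1-1 -> 8.7p1-2)
EOF

# List every package upgraded on the suspected date
grep '^\[2021-09-27' /tmp/pacman-sample.log | grep ' upgraded '
```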
BTW one more post without an inxi -Faz and the thread will be locked.
The forum is not a one way street. You expect assistance, yet you do not provide requested outputs or requested information, and refuse to answer any questions put to you.
Have you performed the step I suggested to identify the cause?
So to sum things up, @atkatana:
No requested outputs provided.
No requests for information supplied.
No answers to any questions put to you.
No feedback to suggested solutions provided.
By demonstrating this type of behavior, you come off as simply trolling our forum. If you actually want assistance, you have a very funny way of showing it. Keep up this lack of cooperation with forum assistants and you will soon burn your bridges here, with no one to blame but yourself.
Perhaps to you it seems I'm going out of my way to be a dick, but you are making this an exercise in frustration for the forum assistants. You still have not answered any of the questions I put to you.
If you really want to make progress on this issue, you need to sift your pacman log for the updates that broke your virtualized environments. As you more or less know the date this happened, it hopefully won't be too hard to narrow down the package(s) responsible for the breakage.
Please post the list of package updates that took place when your breakage occurred.
You can then start selectively downgrading any packages related to virtualization. You are the one who needs to put in the work if you wish to see a resolution to this issue. I only ever install systems to bare metal, so I cannot troubleshoot this problem for you. You will likely need to do the detective work required if you expect progress to be made with this issue.
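A minimal sketch of the selective downgrade, assuming the older package versions are still in the pacman cache. The filenames and versions below are hypothetical placeholders, and a throwaway directory stands in for /var/cache/pacman/pkg so the listing can be demonstrated safely:

```shell
# Stand-in for /var/cache/pacman/pkg (hypothetical filenames, for illustration only)
cache=/tmp/fake-pacman-cache
mkdir -p "$cache"
touch "$cache/qemu-6.0.0-12-x86_64.pkg.tar.zst"
touch "$cache/libvirt-7.7.0-1-x86_64.pkg.tar.zst"

# Find cached versions of the suspect virtualization packages
ls "$cache" | grep -E '^(qemu|libvirt)'

# The actual downgrade, ONE package at a time, retesting the VM after each:
# sudo pacman -U /var/cache/pacman/pkg/qemu-6.0.0-12-x86_64.pkg.tar.zst
```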
To the OP:
You have still not provided an inxi -Faz output. Perhaps there is a commonality between your systems that can be identified as a factor with this information. I have read posts on the Arch forum where specific hardware was the cause of virtualization breakages in the past. We can’t possibly determine if hardware could be a factor without your hardware specs.
I have asked several times now:
Neither of you has seen fit to answer this query. You seriously can't expect us to help find a solution if we have to keep guessing at everything. Getting answers from both of you is akin to pulling teeth.
I ask again:
Have you tested numerous other kernels including linux-mainline and linux-next-git?
Please start responding to questions put to you if you wish to receive assistance.
I have not tried any kernel updates, rollbacks, etc. to date. In my experience so far, if something I have not touched breaks, it will generally resolve itself via the patch process. I had a GRUB issue that had to be fixed and a couple of file system issues that were self-inflicted, so in general I try to support the patch/update process. My VMs are not in general critical day to day, but they are needed at times, and there is data and work effort on those VMs, so I need to get this resolved to support my work.
I do know the date range ~25-9-21 to 27-9-21 and will go get the logs and post them. I do understand the process and want to support getting this fixed for all.
Go through your BIOS settings to see if there are any settings that affect virtualization that can be changed.
Check if your BIOS has an update available.
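One way to tell whether the BIOS virtualization toggle is actually on is to check whether the CPU flags reach the OS; if VT-x/AMD-V is disabled in firmware, the vmx/svm flags disappear from /proc/cpuinfo. A sketch:

```shell
# vmx = Intel VT-x, svm = AMD-V; absent flags usually mean the BIOS switch is off
if grep -qE '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "virtualization extensions: exposed to the OS"
else
    echo "virtualization extensions: NOT exposed (check the BIOS VT-x/AMD-V setting)"
fi
```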
There is another option rather than downgrading all the virtualization-related packages that were upgraded at the breakage: look to see if the affected packages have a newer development -git version that can be installed in preference to downgrading.
Ok, so the most likely transactions that I can find are the Linux kernel and linux-firmware. Both updated on 27-9-21. The only other transactions that day that would be in the mix are the linux-zen kernel and the linux-zen headers. All the kernels go from 5.14.7 to 5.14.8.
Nothing else in the mix looks like it would have anything to do with KVM, virtualization, etc. The update to openssh, for instance, is not imho likely to be a problem here.
On the BIOS question, I am up to date with the HP BIOS for the box. The VT-x settings are more or less on/off and are set to on. No other updates or changes have been made to the system or BIOS, and as noted, to date I have not changed the kernel or any other external settings.