I am also having the same problem, and I also have multiple VMs running current Ubuntu or Debian distros. This is limited to Garuda and the patch process on or about Sept 26th. No fix found to date, but both old (existing) and new VMs fail to run. You can create a VM, but nothing will run from the GUI or CLI.
While I 100% understand that working on a rolling distro has its challenges, I have not seen something break like this via an update, take this long to work into the support threads, and go unaddressed. This is an issue with Garuda and needs to be worked out within the community, even if someone who starts the process bails out.
How can you possibly make such a statement with any degree of certainty? This is pure speculation on your part. Always remember: whenever you point the finger at someone else, there are three fingers pointing back at you. Have you even checked the bug trackers on the related upstream projects (and Arch itself)? Have you tested numerous other kernels, including linux-mainline and linux-next-git? Have you reported fully on your attempted fixes (as detailed in the help request template)? We are simply left guessing as to which fixes you have attempted. The required information you have provided is woefully inadequate. In addition, your expectations of entitlement are way out of line for a free distro with a small dev team and an equally small group of forum support volunteers.
Neither of you has provided your system specs with an inxi -Faz output, as requested multiple times. This is explicitly spelled out as a requirement in the help request template, which also states:
Without it, you will not receive any help from the Garuda team or your topic is likely to be closed without notice.
You ignore all the expectations on our forum, yet you expect the Garuda devs to come running to your rescue for software that has nothing to do with Garuda. Virtualization technologies are not our responsibility to maintain or bugfix. You should determine whether this is a problem with your virtualization packages by downgrading all their components to see if this corrects your issue. If the qemu or other related updates are at fault, then it is your responsibility to report this on the relevant project's upstream bug tracker.
Garuda expects users to perform their due diligence, and your performance has fallen far short in that department. Learn to do for yourselves before you expect others to do for you.
Sorry to be so blunt, but you need to put on your big boy boots and start doing more digging. There are posts out there on other forums similar to the errors you have received. Perhaps with a little more effort you can turn up something that may help.
The point I am making is that the update broke a functional component of the distro you are maintaining. I am not assigning any blame; it's an observation of the painfully obvious. The issue very much appears to be reproducible across multiple systems all running Garuda, all failing with the same error after an update.
I am not asking anyone to come running, nor for you to fix it; I am simply saying that closing the ticket seems like the exact opposite of what is needed.
It's broken on your distro on more than one PC running in more than one environment. If you would like to engage to address it, let me know. If you simply want to stand on your soapbox, have at it.
Check your pacman log for the list of upgraded packages when the breakage occurred.
Downgrade all packages affecting virtualization selectively one at a time.
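Sketching those two steps in the shell (the date window and package names below are illustrative; on a real system, run the grep directly against /var/log/pacman.log):

```shell
# Stand-in log so the filter can be demonstrated; replace /tmp/pacman-sample.log
# with /var/log/pacman.log on a real system.
cat > /tmp/pacman-sample.log <<'EOF'
[2021-09-20T09:00:00+0000] [ALPM] upgraded openssh (8.7p1-1 -> 8.7p1-2)
[2021-09-27T10:00:01+0000] [ALPM] upgraded linux (5.14.7-1 -> 5.14.8-1)
[2021-09-27T10:00:02+0000] [ALPM] upgraded qemu (6.1.0-1 -> 6.1.0-2)
EOF

# Packages upgraded in the suspected breakage window (25th-27th Sept):
grep -E '^\[2021-09-2[5-7].*\] \[ALPM\] upgraded' /tmp/pacman-sample.log
```

Anything virtualization-related that turns up in that window is a downgrade candidate.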
BTW one more post without an inxi -Faz and the thread will be locked.
The forum is not a one way street. You expect assistance, yet you do not provide requested outputs or requested information, and refuse to answer any questions put to you.
Have you performed the step I suggested to identify the cause?
So to sum things up, @atkatana:
No requested outputs provided.
No requests for information supplied.
No answers to any questions put to you.
No feedback to suggested solutions provided.
By demonstrating this type of behavior you come off as simply trolling our forum. If you actually want assistance you have a very funny way of showing it. Keep this lack of cooperation up with forum assistants and you will be burning your bridges here pretty soon with no one to blame but yourself.
Perhaps to you it seems I'm going out of my way to be a dick, but you are making this an exercise in frustration for forum assistants. You still have not answered any of the questions I put to you.
If you really want to make progress on this issue, you need to sift your pacman log for the updates that broke your virtualized environments. As you more or less know the date this happened, it hopefully won't be too hard to narrow down the package(s) responsible for the breakage.
Please post the list of package updates that took place when your breakage occurred.
You can then start selectively downgrading any packages related to virtualization. You are the one that needs to put in the work if you wish to see a resolution to this issue. I only ever install systems to bare metal, so I can not troubleshoot this problem for you. You will likely need to do the detective work required if you expect progress to be made with this issue.
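As a concrete sketch of the selective downgrade: Arch keeps previously installed package versions in its cache, so a downgrade is a pacman -U of the cached file. The directory and version strings below are stand-ins for illustration; check what your own cache actually holds.

```shell
# Stand-in cache directory so the listing can be demonstrated;
# the real cache lives in /var/cache/pacman/pkg
mkdir -p /tmp/pkgcache
touch /tmp/pkgcache/libvirt-7.7.0-1-x86_64.pkg.tar.zst \
      /tmp/pkgcache/libvirt-7.8.0-1-x86_64.pkg.tar.zst

# See which versions of a suspect package are cached:
ls /tmp/pkgcache | grep '^libvirt-'

# The actual downgrade would then be (against the real cache, not the demo dir):
#   sudo pacman -U /var/cache/pacman/pkg/libvirt-7.7.0-1-x86_64.pkg.tar.zst
```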
To the OP:
You have still not provided an inxi -Faz output. Perhaps there is a commonality between your systems that can be identified as a factor with this information. I have read posts on the Arch forum where specific hardware was the cause of virtualization breakages in the past. We can't possibly determine if hardware could be a factor without your hardware specs.
I have asked several times now:
Neither of you has seen fit to answer this query. You seriously can't expect us to help find a solution if we need to keep guessing at everything. Getting answers from both of you is akin to pulling teeth.
I ask again:
Have you tested numerous other kernels including linux-mainline and linux-next-git?
Please start responding to questions put to you if you wish to receive assistance.
I have not to date tried any kernel updates, rollbacks, etc. In my experience so far, if something I have not touched breaks, it will generally resolve itself via the patch process. I had a GRUB issue that had to be fixed, and a couple of file system issues that were self-inflicted, so in general I try to support the patch/update process. My VMs are not critical day to day, but they are needed at times, and there is data and work effort on those VMs, so I need to get this resolved to support my work.
I do know the date range (~25-9-21 to 27-9-21) and will go get the logs and post them. I do understand the process and want to support getting this fixed for all.
Go through your BIOS settings to see if there are any settings that affect virtualization that can be changed.
Check if your BIOS has an update available.
There is another option rather than downgrading all packages related to virtualization that were upgraded at the breakage. Look to see if the affected packages have a newer developmental git version that can be installed in preference to downgrading.
OK, so the most likely transactions I can find are the Linux kernel and the linux-firmware package. Both updated on 27-9-21. The only other transactions that day that would be in the mix are the linux-zen kernel and the linux-zen headers. All the kernels go from 5.14.7 to 5.14.8.
Nothing else in the mix looks like it would have anything to do with KVM, virtualization, etc. The update to openssh, for instance, is not IMHO likely to be a problem here.
On the BIOS question, I am up to date with the HP BIOS for the box. The VT-x settings are more or less on/off, and are set to on. No other updates or changes have been made to the system or BIOS, and as noted, to date I have not changed the kernel or any other external settings.
I've been experiencing the exact same issues for about a week or so (I hadn't tried to start any VMs for about 2 weeks before that).
This does coincide with the release of libvirt 7.8.0; I did see something on their mailing lists about improvements to cleaning up cgroups on shutdown. But tbh I definitely don't know enough about cgroups (let alone how libvirt wants/needs to manage them).
I tried downgrading libvirt to 7.7.0, as it's the only thing I can see that has had updates in the last few weeks.
But that didn't change anything.
I installed the git versions of the following:
yay -S virtkvm-git libvirt-git qemu-arch-extra-git
Also didn't really fix anything:
virsh # start win10
error: Failed to start domain 'win10'
error: Unable to read from '/sys/fs/cgroup/machine.slice/machine-qemu\x2d5\x2dwin10.scope/libvirt/cgroup.controllers': No such file or directory
It should be noted that I usually run the xanmod kernel, but switched to mainline to test and do the git and previous version installs. So no difference as far as I can tell there.
This thread is 6 months old and is probably totally unrelated:
However, it does have some ideas not proposed before. One is to add this line to the kernel boot parameters:
grub-mkconfig -o /boot/grub/grub.cfg
The Garuda default is:
Just thought I'd mention this post, as that setting was preventing the startup of VMs. VMs are not my thing, so I'm just throwing this out there on the remote chance it might help. I guess it's possible that something recently has become incompatible with the change to cgroup v2.
Ubuntu is/was one of the last holdouts to switch to cgroups version 2. I read a post from this August saying that they were just about to switch; they were apparently waiting until they had full compatibility with snaps before switching. Most distros are using version 2 now, I believe.
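For anyone who wants to confirm which cgroup version their own system is actually running, one quick check is the filesystem type mounted at the cgroup root:

```shell
# Prints the filesystem type at the cgroup mount point:
#   cgroup2fs -> unified cgroup v2 hierarchy
#   tmpfs     -> legacy cgroup v1 layout
stat -fc %T /sys/fs/cgroup/
```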
Garuda has been using it for quite some time, a dev would know the exact date of implementation.
Glad to hear that fixed this issue for some of you.
Confirmed: this will solve the problem with VMs not booting. I would point out, for those who want or need the help, that the setting can be found in the Garuda Assistant under boot options. There you can edit the value of systemd.unified_cgroup_hierarchy=1 (i.e., change it to 0 and save).
I would then uncheck the cgroup v2 compatibility option on the same page. Save all changes, and reboot.
There is no need to use the UI, but given this is one of the best-looking distros out there, those who use the GUI have the option to address this quickly.
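For anyone who prefers the terminal to Garuda Assistant, the same fix amounts to editing the kernel command line by hand. The standard GRUB paths are shown below; this is a sketch, and your bootloader setup may differ.

```shell
# In /etc/default/grub, change the parameter inside
# GRUB_CMDLINE_LINUX_DEFAULT from:
#   systemd.unified_cgroup_hierarchy=1
# to:
#   systemd.unified_cgroup_hierarchy=0
# then regenerate the GRUB config and reboot:
sudo grub-mkconfig -o /boot/grub/grub.cfg
```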
Thanks to all. Always good to find a solution that helps everyone.