QEMU 8.0 (qemu.org)
295 points by neustradamus on April 24, 2023 | 33 comments


Cool. I've actually been playing with QEMU internals a lot recently, specifically with the experimental multi-process features, though I can't seem to find any consistency on where the main project is headed. They admit that the documentation pages can be well out of date relative to the upstream implementation, but the project seems split-brained even within the code.

The main project ships with the multi-process qemu approach, mostly defined in their docs: https://www.qemu.org/docs/master/system/multi-process.html https://www.qemu.org/docs/master/devel/multi-process.html
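
For context, the flow those docs describe is roughly: run the emulated device in a separate QEMU process and attach it to the main VM through a proxy PCI device over a socketpair. Something like this, going from memory of the docs (these options are experimental and may have drifted):

    # device-emulation process (one end of a socketpair passed as fd 4)
    qemu-system-x86_64 -machine x-remote \
        -device lsi53c895a,id=lsi0 \
        -object x-remote-object,id=robj0,devid=lsi0,fd=4

    # main VM process: a proxy device stands in for the remote one
    qemu-system-x86_64 ... -device x-pci-proxy-dev,id=lsi0,fd=4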

But I saw an update buried in a mailing list saying that development of the multi-process system has been superseded by vfio-user, mostly led by Nutanix: https://github.com/nutanix/libvfio-user

The Nutanix repo refers to an Oracle-led fork of QEMU with the full vfio-user implementation built in: https://github.com/oracle/qemu

So, they're still separate projects, right? Well, kinda. The mainline project has the vfio-user-server implementation merged in: https://github.com/qemu/qemu/blob/ac5f7bf8e208cd7893dbb1a952...

But not the client side (vfio-user-pci). So, the feature is half-baked in the mainline project.
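
So as far as I can tell you can run the server half from mainline today, but you need the Oracle tree (or some other vfio-user client) for the vfio-user-pci side. Roughly, and going from memory of the option names (double-check against whichever tree you build), the pairing looks like:

    # server side (merged in mainline): expose an emulated device over a vfio-user socket
    qemu-system-x86_64 -machine x-remote,vfio-user=on \
        -device lsi53c895a,id=lsi0 \
        -object x-vfio-user-server,id=vus0,device=lsi0,socket.type=unix,socket.path=/tmp/vfio-user.sock

    # client side (Oracle fork / other vfio-user clients): consume it as a PCI device
    qemu-system-x86_64 ... -device vfio-user-pci,socket=/tmp/vfio-user.sock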

I don't know if any of the qemu devs browse HN but it would be nice to hear more about the plans for vfio-user.


"Since version 3.0.0, QEMU uses a time based version numbering scheme: major incremented by 1 for the first release of the year minor reset to 0 with every major increment, otherwise incremented by 1 for each release from git master micro always 0 for releases from git master, incremented by 1 for each stable branch release" If I'm reading this right, "master" has more significant part than stable... This feels wrong. What am I missing?


The next release from the 8.0 stable branch will be 8.0.1, even if it's released in 2024.
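
A worked example of how the scheme plays out (the entries after 8.0.0 are illustrative, not a release schedule):

    7.2.0 - release from git master, December 2022
    8.0.0 - first release of 2023, so major is bumped and minor resets
    8.0.1 - a stable-branch release, only micro is bumped, whenever it ships
    8.1.0 - the next release from git master, minor is bumped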


> x86: support for Xen guests under KVM with Linux v5.12+

Clearly I haven't been keeping up, because this is a bit of a surprise. Xen under KVM? Those things are polar opposites!


Support for Xen guests by implementing the Xen hypercalls. This was added to KVM proper to support Amazon's proprietary vmm running ancient AMIs on newer systems, but it seems it's finally getting qemu support too.


I haven't looked at this work in detail, but you need more than just hypercalls to run those ancient guests; those guests expect to see xenstore, the PV disk and network backends, and so on. Those would be a lot harder to emulate in the kernel (and probably a bad idea to try anyway).


I mean, sure, this specific work is in qemu, not the kernel. Some hypercalls are trapped by the kernel, some trap out to a vmm like qemu. Support for pv devices mainly lives in the vmm. But A) even the hypercalls the kernel handles need to be manually enabled by the vmm, and B) the pv device support is ultimately exposed as in-guest memory tables and hypercalls (now partially emulated by qemu).
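
Concretely, QEMU 8.0 exposes this as a KVM accelerator property; roughly (option names from memory of the new Xen-on-KVM docs, and the xen-version value just encodes the advertised Xen version, e.g. 4.10):

    # needs Linux v5.12+ for the KVM-side Xen hypercall support
    qemu-system-x86_64 -accel kvm,xen-version=0x4000a,kernel-irqchip=split ...

The PV backends (xenstore, disk, net) then get provided by QEMU on top of that, which is exactly the part that wouldn't make sense to put in the kernel.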


Xen is kind of dead so people are migrating to KVM but they don't want to update their images.


People have been saying "Xen is dead" for over 15 years; and yet here we are. :-)

Xen and KVM are different beasts with different advantages. (Some of Xen's I detailed in a comment here [1].) KVM in particular has some advantages in the "vanilla server virt" space, which is why KVM often ends up being the default in the Linux distro world. But we're not going away any time soon.

[1] https://news.ycombinator.com/item?id=32607837


I used Xen a lot in late 2000s, when it was a great alternative to spinning up more physical machines. Had a lot of fun with the office servers, running all disks with RAID some-number-or-other on a machine with an NBD[0] server on gigabit Ethernet, and then a couple of physical machines hosting all the XEN machines.

XEN felt fast and, though not exactly easy to set up, fairly straightforward.

0: https://en.wikipedia.org/wiki/Network_block_device


Since when is Xen dead? I liked the design better than kvm.


Don't know but it's a shame. IIRC Qubes uses Xen, because Xen has a small Trusted Computing Base (TCB).


Xen has a smaller TCB only if you consider Dom0 not part of the TCB. But since a compromised Dom0 would lead to the compromise of the whole system, I think it makes more sense to view Dom0 as part of the TCB.


Doesn't AWS run on a modified Xen setup?


They used to, but they run mainly on KVM now with a custom vmm.


First web hit I get on that topic is

https://www.freecodecamp.org/news/aws-just-announced-a-move-...

(I only read the headline and the date just now: a move from Xen towards KVM starting in 2019. So I guess at AWS scale that's still a relatively new thing.)


They started the transition back in 2017. https://www.theregister.com/2017/11/07/aws_writes_new_kvm_ba...


Calling

   systemd-detect-virt
on the first two AWS machines of different types that I have access to, the result is

   xen
on one of them and

   amazon
on the other.


This qemu support comes after Amazon added support to the kernel to emulate Xen when combined with their custom vmm. A key component of that is the hypervisor lying when it's asked by a guest to describe itself.

There's a good chance that 'xen' box is actually KVM and Amazon's proprietary vmm.
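
If you want to poke beyond systemd-detect-virt (which largely goes by the CPUID hypervisor signature and DMI strings), a few other guest-side places to look, output varying by kernel and instance type:

    # kernel's own detection at boot, e.g. "Hypervisor detected: KVM"
    dmesg | grep -i hypervisor

    # hypervisor vendor as derived from CPUID
    lscpu | grep -i hypervisor

    # present when the kernel thinks it's on Xen (including emulated Xen)
    cat /sys/hypervisor/type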


Sounds like they developed a `user-agent` situation.

Apparently, one must never ask popular software what it is.


You either die a hero or live long enough to see yourself become the villain (i.e. last vestiges languishing as a compat shim that even the developers who support you wish they could finally deprecate and remove).


You're using very obsolete instances.


They are neither brand new nor 10+ years old legacy types. Somewhere in the middle.


I always looked at Xen guests as something similar to a plain VM image with lots and lots of virtio-like stuff. I'd even assume they'd perform better if they really are a thinner virtualization layer on top of the host.


My interpretation of this:

You have an x86 machine running Linux v5.12 which is running QEMU using KVM accelerations. Using this setup, you're able to emulate a machine that can run Xen. Since Xen is a type 1 hypervisor, you would then run a Dom0 and perhaps many DomU machines inside it, let's say Linux here for both.

Hardware > Linux KVM > QEMU > Xen guest > Linux Dom0 guest, Linux DomU guest


My understanding is that this work is designed to run DomU images more or less directly. Qemu itself more or less fills the role of Dom0 here.


That makes more sense.


I hope so because it might finally allow me to run Qubes OS as a vm to check it out.


That's nested virtualization and it's completely unrelated.


Experimental VFIO migration? I guess this will make it easier to migrate a VM with a GPU workload from host to host without interrupting it.
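
The migration flow itself would be the normal QEMU live migration; the experimental bit is the VFIO device (and its vendor driver) being able to save and restore its state. A rough sketch, with placeholder PCI address, hostname and port, and assuming the passed-through device's driver supports migration at all:

    # destination host: identical VM config, waiting for incoming state
    qemu-system-x86_64 ... -device vfio-pci,host=0000:65:00.0 -incoming tcp:0:4444

    # source host, from the monitor: start a background migration
    (qemu) migrate -d tcp:dest-host:4444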


Are there GPUs that allow enough internal state to be dumped / reloaded for a migration like that to work?

(I'm asking from pure ignorance.)


You can do it in the vfio driver by just keeping the bookkeeping information around as resources are created.


I'm also ignorant about this topic, but it's certainly doable. There's a (quite old) video demonstrating it: https://youtu.be/jjkcPn19fcs



