Hacker News

Inspired by Linus Tech Tips and bummed that the new Threadrippers were out of my budget, I decided to build a 4-gamers-1-CPU box based on a Ryzen 3900X. I used the Ryzen 3900X (12 cores) and an X570 Taichi, plus 4 GPUs in the Nvidia 1060-to-1660 range. (I had to use one of these: https://www.amazon.com/gp/product/B07YDH8KW9 to connect my 4th GPU to one of the M.2 slots.) My total cost was somewhere in the $2400 range.

I personally think it's a super awesome setup, but it's _definitely not_ for the faint of heart. You have to _really enjoy_ debugging crazy shit for it to be worth it.

But even at my "budget" $2400 price, a $20/month game streaming service covering all 4 users would buy about _ten years_ of service for the same money.

The economics of a streaming service are really quite killer.



Which OS do you use as a host? What I really dislike about my setup is the dedicated GPU I need just for the BIOS to POST.

//Edit: Using a Ryzen 5 2600 and a Gigabyte X470 Aorus Ultra Gaming


I'm using Unraid (a Linux distro), which uses KVM for the virtualization. If you're using KVM, you can pass through your primary GPU by dumping the vbios and then passing it along when you initiate the passthrough. Passing a custom vbios is pretty easy to do in Unraid, though dumping the vbios is still a manual process. I have to do that in my setup because I don't have any spare slots for another GPU, even a tiny one.
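For anyone on plain KVM/libvirt rather than Unraid, the dump step looks roughly like this. The PCI address and output path below are placeholders (find yours with `lspci -D`), and this only works while nothing else is using the card:

```shell
# Placeholder PCI address - substitute your GPU's (lspci -D | grep -i vga)
GPU=0000:0a:00.0

# Unbind the GPU from its driver so the ROM becomes readable, then dump it
echo "$GPU" > /sys/bus/pci/devices/$GPU/driver/unbind
echo 1 > /sys/bus/pci/devices/$GPU/rom      # enable ROM reads
cat /sys/bus/pci/devices/$GPU/rom > /root/vbios.rom
echo 0 > /sys/bus/pci/devices/$GPU/rom      # disable ROM reads again
```

In a libvirt domain XML you'd then point the passed-through `<hostdev>` at the dump with `<rom file='/root/vbios.rom'/>`, which is the same thing Unraid's custom-vbios field does. (Nvidia dumps sometimes need a header trimmed off first; the Space Invader One video linked below covers that.)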


Thanks! That leaves me wondering, though. I'm also using Unraid, and the main GPU is exclusively used for my VM passthrough (it's a Radeon). I thought that once the BIOS has claimed the primary GPU at boot, it can't be freed for passthrough, hence the GeForce GT710 for the BIOS. If I could free that, I could host another gaming setup.


You can definitely run another gaming setup through that. Here's what you need to do:

- Follow the instructions in this video for getting a dump of your vbios: https://www.youtube.com/watch?v=mM7ntkiUoPk (you can stop once you've gotten the vbios)

- Make sure your Unraid is updated to at least 6.7

- Read the "New vfio-bind method" section of: https://forums.unraid.net/topic/80001-unraid-os-version-67-a...

- Use the knowledge gained from that to add the IOMMU group assigned to your GT710 to /boot/config/vfio-pci.cfg

- Reboot your Unraid server

- Do the normal GPU passthrough thing for the GT710 for a VM, but add the dumped vbios to the "Graphics ROM BIOS:" field in the VM "edit" GUI

Hopefully it should work :D
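For reference, on Unraid 6.7+ that config file is just a `BIND=` line listing the PCI addresses to hand to vfio-pci at boot. Something like this, where the addresses are made up - use your GT710's address and that of its HDMI audio function (they're usually in the same IOMMU group), as shown under Tools > System Devices:

```
# /boot/config/vfio-pci.cfg - example addresses only
BIND=0000:0b:00.0 0000:0b:00.1
```

Everything listed there gets bound to the vfio-pci stub driver at boot instead of a real driver, which is what frees the card for a VM.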

One thing to keep in mind about the vfio-pci.cfg file is that it's effectively a blacklist, and if you do something that could change your IOMMU group assignments (such as adding or removing a PCI device) you could end up inadvertently blacklisting a PCI device you don't intend to. All you need to do is update the IOMMU groups in vfio-pci.cfg to fix it, but it can freak you out if you're not expecting it.

(For example, if I remove one of my GPUs, one of my SATA controllers will inevitably end up getting the IOMMU group that _used_ to belong to a GPU, so it'll get blacklisted, and two of my array drives will appear missing until I update the vfio-pci.cfg to match the new IOMMU groups.)
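A quick way to re-check the groups after a hardware change is the standard IOMMU-listing loop (plain Linux, works from the Unraid terminal too), then compare the output against what's in /boot/config/vfio-pci.cfg:

```shell
# Print every IOMMU group and the devices in it
for g in /sys/kernel/iommu_groups/*; do
    n=${g##*/}
    for d in "$g"/devices/*; do
        echo "IOMMU group $n: $(lspci -nns "${d##*/}")"
    done
done
```

If a group number next to your SATA controller now matches an address you bound in vfio-pci.cfg, that's the "missing drives" scenario described above.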


Thanks a lot for the write-up! Once I've got a tad more time on my hands, I'll tinker around a bit. :)


I'm using Linux and configured the kernel to disown the GPU entirely from the very start (blacklisted kernel modules, disabled framebuffer). After that, I'm able to pass the GPU through to a VM.
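On a stock distro the usual recipe for this is a modprobe config plus a couple of kernel parameters. The vendor:device IDs below are examples only - take yours from `lspci -nn` (the GPU and its HDMI audio function):

```
# /etc/modprobe.d/vfio.conf - claim the GPU with vfio-pci before any
# graphics driver can load. Example IDs only; substitute your own.
options vfio-pci ids=10de:1c82,10de:0fb9
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
```

Combine that with `amd_iommu=on iommu=pt` (or `intel_iommu=on` on Intel) on the kernel command line to enable the IOMMU, plus something like `video=efifb:off` to keep the kernel framebuffer off the card, then rebuild the initramfs so the config takes effect at boot.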


Thanks for the hint - I always thought it was the BIOS grabbing the GPU and not releasing it again for the VM. I might need to read up on some stuff.


Is your complete build posted somewhere (for example on PCPartPicker)? Thanks



