I've been running a gaming VM in Proxmox for months now and have really liked it. My base is an old Lenovo ThinkStation P500, using an old GTX 1070, which is more than enough for what I play (Minecraft, Factorio, Cities, etc.).
I've tried doing that too, but the delay was far too long, around 3-4 seconds, and the bitrate was so low it looked like a 480p stream. How did you set up your Proxmox?
It sounds like you didn't set up GPU passthrough.
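If you haven't done the host-side prep, that's the usual culprit. Roughly, on an Intel host it looks like this (a sketch of the standard steps, not a drop-in config - the PCI IDs below are examples for a GTX 1070, yours will differ, check `lspci -nn`):

```
# /etc/default/grub - enable the IOMMU (amd_iommu=on instead on AMD), then:
#   update-grub && reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules - load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# /etc/modprobe.d/vfio.conf - bind the GPU's video + audio functions to vfio-pci
# (example IDs for a GTX 1070; get yours from: lspci -nn | grep -i nvidia)
options vfio-pci ids=10de:1b81,10de:10f0

# /etc/modprobe.d/blacklist.conf - keep the host drivers off the card
blacklist nouveau
blacklist nvidia

# apply and reboot
update-initramfs -u -k all
```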
I passed through my GPU and the SSD to the VM directly, and performance has been solid for me, with 8 cores, 32GB RAM, and a GTX 1070 assigned. Monitors are plugged directly into the card, with the display setting in Proxmox set to none, since I don't use the Proxmox console screen for the Windows VM.
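For anyone curious, the relevant bits of the VM config look roughly like this (a sketch - the VMID, PCI address, and disk ID are placeholders for whatever your hardware reports):

```
# /etc/pve/qemu-server/100.conf (relevant lines only)
machine: q35
bios: ovmf
cpu: host
cores: 8
memory: 32768
hostpci0: 01:00,pcie=1,x-vga=1   # the GTX 1070, all functions
vga: none                        # no virtual display; monitors plug into the card

# passing a whole physical SSD through, run once from the host shell:
#   qm set 100 -scsi1 /dev/disk/by-id/<your-ssd-id>
```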
That's awesome! I've been debating trying a Linux VM for gaming to avoid Windows altogether. Are you running Windows in your VM?
Yes, it's a Windows 11 VM.
For those wondering, it also works with a Linux VM:
- Host: AMD Ryzen 9 3900X + Proxmox
- PCI passthrough for an Nvidia RTX 3060 12GB
- A Debian VM with 16GB RAM and as many cores as the host has (if you assign fewer cores, you'll have to tune CPU affinity/pinning - see the config sketch at the end of this comment)
- An HDMI dummy plug, so the GPU thinks a display is attached
- I stream the VM to my office using Sunshine and Moonlight
It's not easy to set up, but it works. I'm able to run games like Borderlands 3 at ~50 FPS at 1920x1080 with visual effects set to the max (important: disable vsync in the games!).
The only problem is disk access, which tends to add some latency. So with badly coded games (e.g. Raft), the framerate drops a lot (Raft sometimes goes down to 20 FPS).
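For reference, the relevant guest config on my side looks something like this (a sketch; the PCI address is whatever lspci reports for the 3060, and the commented-out lines show what pinning would look like with fewer cores - the affinity option needs a reasonably recent Proxmox):

```
# /etc/pve/qemu-server/101.conf (relevant lines only)
machine: q35
bios: ovmf
cpu: host
cores: 24                       # 3900X = 12 cores / 24 threads, all of them
# with fewer cores, pin them instead, e.g.:
#   cores: 12
#   affinity: 0-5,12-17
memory: 16384
hostpci0: 0a:00,pcie=1,x-vga=1  # the RTX 3060
vga: none                       # the HDMI dummy plug acts as the "display"
```

Sunshine then runs inside the Debian guest, and Moonlight on the client just connects to the VM's IP.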
I've done streaming too, but I used Parsec.
Am I missing something here? Why use a VM for gaming?
I'll take Linux with Proton any day over all that faffing with Windows and GPU passthrough.
You are - this is a server: it hosts approximately 20 LXC containers alongside a couple of VMs. One of the VMs runs Windows and gets a GPU and a couple of USB ports. Another VM runs Linux, which in turn runs Home Assistant, and gets a USB port so it can use my Zigbee dongle, etc.
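The USB bits are one line each, e.g. for the Zigbee dongle (a sketch - the vendor:product ID comes from lsusb and will differ for your stick, and the VMID is arbitrary):

```
# find the dongle's ID on the host
lsusb
#   e.g. "Bus 001 Device 004: ID 10c4:ea60 Silicon Labs CP210x UART Bridge"

# hand it to the Home Assistant VM
qm set 102 -usb0 host=10c4:ea60
```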
I could feasibly use a Linux VM instead, but I'd have to do the same passthrough chicanery - and the way I have this set up right now means I don't treat the gaming workload as anything special; it's just another VM. I can snapshot it, move it between storage devices, share hardware between it and other VMs, and so on.
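That part is all stock qm commands, nothing special (a sketch; VMID, snapshot name, and storage name are placeholders):

```
qm snapshot 100 pre-driver-update   # disk-only snapshot before fiddling
qm rollback 100 pre-driver-update   # roll back if the gaming VM breaks
qm move-disk 100 scsi0 fast-nvme    # shuffle the disk to another storage pool
                                    # (spelled qm move_disk on older releases)
```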
Oh also, the second GPU my machine has (an Intel iGPU) doesn't go to waste either! That gets passed through to yet another VM, which hosts Jellyfin, and it uses the iGPU component of the CPU to do video transcoding.

Virtualising workloads like this is far nicer to manage than, for example, just having a Linux box with all these services running on it. What if the game crashes? In the VM world, I just restart the VM. What if one of the other services shits the bed and starts writing logs frantically (as happened to me recently)? It fills the disk, and suddenly I can't game! In the VM world, each service gets its own portion of disk space and therefore can't eat it all up. You could feasibly solve all these problems with the setup you describe, but why, when virtualisation has such a small performance penalty and comes with a bunch of other benefits for free?
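The iGPU is the same hostpci trick (a sketch - Intel iGPUs usually sit at 00:02.0, but check lspci; the VMID is arbitrary):

```
# pass the iGPU to the Jellyfin VM
qm set 103 -hostpci0 00:02.0

# inside the guest, confirm the render node is there for VA-API/QSV transcoding
ls /dev/dri
vainfo
```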
I usually shy away from VMs because I have to dedicate a fixed amount of resources to them, e.g. RAM.
I tend to rely on Docker or bare-metal services on a server. But I don't use a server for gaming.
I do this using a native Linux host so my Google Corals work right for my Frigate security system. The Windows VM is a QEMU/KVM guest with GPU passthrough, managed primarily through a web browser via Cockpit: https://qqq.ninja/blog/post/vfio-on-arch/
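On plain libvirt, the equivalent moves are roughly this (a sketch; the PCI addresses and VM name are examples, not from my actual setup):

```
# detach the GPU's functions from the host driver
virsh nodedev-detach pci_0000_01_00_0   # video
virsh nodedev-detach pci_0000_01_00_1   # HDMI audio

# then add a <hostdev type='pci'> block for 01:00.x to the guest definition
virsh edit win11
```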
I like this idea. I've been self-hosting on a VPS cloud server, but this is a great reason to do something in my home.
I did the Lightning BIOS flash successfully to ensure virtualization works on my 11900ES. The goal is to build an Unraid server that permanently runs two dozen Docker containers and several VMs for daily usage. At the moment this setup runs on a 10-year-old HP Z420 Xeon workstation, but I want to go for something less power-hungry with the Erying setup.
My only problem is that since flashing the Lightning BIOS, both system fan connectors always run at full tilt, no matter what kind of "Smart Control" settings I apply. I've tried several different 4-pin and 3-pin fans, with and without PWM - no difference. Hard-resetting the BIOS doesn't help either. This did not occur with the stock BIOS. Has anyone experienced the same and could give me a hint what to do?
I experience exactly what you describe, except on the CPU fan header - that one is controllable as long as a 4-pin fan is used.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| LXC | Linux Containers |
| SSD | Solid State Drive mass storage |
| VPS | Virtual Private Server (opposed to shared hosting) |
| Zigbee | Wireless mesh network for low-power devices |
4 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.
[Thread #115 for this sub, first seen 7th Sep 2023, 04:35]