Hello all-- thought I'd pass through (pun intended!) my experience so far getting a multiheaded ESXi box up and running on an AMD Threadripper platform.
Hardware:
ASRock X399 Professional Gaming motherboard
AMD Threadripper 1950X
64GB DDR4 RAM
1TB NVMe M.2 drive
NVIDIA GTX 1080 FE
Software:
ESXi 6.7
Windows 10 64-bit guest
Steps that have worked so far:
I can get a VM with GPU passthrough working. To do this, I had to:
Upgrade the BIOS to the beta version (AGESA update); otherwise the VM won't power up.
Add the mystical hypervisor.cpuid setting to the VM configuration to avoid Error 43 from the NVIDIA drivers in Windows (see the .vmx snippet after this list).
Edit passthru.map for the GPU (I'm using d3d0, not bridge; others have used link; example after this list). On my former build, skipping this caused a complete host hang when restarting a VM.
Turn on all IOMMU options, ACS, and SVM in the BIOS.
Set the VM firmware to BIOS rather than EFI (also in the .vmx snippet below); otherwise it hangs at the Windows loading screen if USB passthrough is enabled.
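For anyone hunting for the exact keys, here are the two .vmx settings the list above refers to, as they sit in my working config (hypervisor.cpuid.v0 is the usual spelling of the Error 43 workaround):

    firmware = "bios"
    hypervisor.cpuid.v0 = "FALSE"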
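The passthru.map edit lives in /etc/vmware/passthru.map on the host. The columns are vendor ID, device ID, reset method, and fptShareable; ffff is a wildcard matching every device from that vendor. My NVIDIA line looks like this:

    # NVIDIA: use d3d0 reset instead of the default
    10de  ffff  d3d0  false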
The board has three USB controllers that I can see. Each appears to be grouped with two other devices in addition to the controller itself: a "nonessential device" and a Platform Security Processor.
I can't get the Aquantia 10GbE NIC to pass through, not that I've tried very hard. It seems to get stuck at "enabled but needs to reboot" no matter how many times I reboot.
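For what it's worth, a workaround I've seen in other X399 threads (untested by me, and the key format below is from memory, so match it against the existing /device/ entries in the file before copying) is to force the device's owner directly in /etc/vmware/esx.conf and then reboot. The PCI address here is just a placeholder for whatever the Aquantia NIC shows up as:

    /device/0000:41:00.0/owner = "passthru"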
Current problem:
I can pass through a USB controller and it works fine. But if I then shut down or reboot the VM, it hangs on the Windows logo when starting up again. From a cold boot of the host, no problems.
Things that haven't worked:
Changing the PCIe switch mode in the BIOS to Gen 2 instead of Auto.
Idea: Perhaps the USB controllers on this board (AMD Family 17h USB 3.0 controller?) also need the d3d0 reset method in passthru.map (sketch below)? Seems strange, but the bug is reminiscent of what the consumer NVIDIA cards do when not switched to bridge mode.
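If I try it, the passthru.map entry would presumably look something like the line below. 1022 is AMD's vendor ID; 145c is the device ID lspci reports for the Family 17h USB 3.0 host controller on similar boards, but verify against your own lspci output before copying it:

    # AMD Family 17h USB 3.0 host controller -- confirm device ID locally
    1022  145c  d3d0  false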
Hope my experience has been helpful to others, and any advice would be much appreciated! Thanks, "LT"