Technical Guide · February 13, 2026 · 18 min read

    Proxmox GPU, USB and PCI Passthrough: The Complete IOMMU / VFIO Guide

    Assign an NVIDIA or AMD GPU, a USB controller, a network card or any PCI device directly to a Proxmox VM — with near bare-metal performance. This guide covers IOMMU configuration, VFIO, common pitfalls and real-world use cases.

    Hardware Passthrough: Giving Direct Access to Hardware

    By default, Proxmox virtualizes hardware: each VM receives emulated devices (virtio network card, virtual GPU, virtual SCSI controller). This is sufficient for most workloads — but not all.

    Hardware passthrough (or PCI passthrough) allows you to assign a physical device directly to a VM, bypassing emulation. The VM sees the hardware as if it were running bare-metal, with native drivers and native performance.

    Use cases are numerous: GPU passthrough for gaming, AI/ML compute or VDI, USB passthrough for home automation dongles (Zigbee, Z-Wave) or license keys, PCI passthrough for high-performance network cards, RAID controllers or capture cards.

    Emulation vs Passthrough

    Emulation (default)

    VM → Virtual driver → QEMU → Host → Hardware
    Emulation overhead (host CPU utilized)
    Hardware shareable between VMs
    Live migration possible

    Passthrough (VFIO)

    VM → Native driver → Hardware (direct)
    Near bare-metal performance
    Hardware dedicated to a single VM
    Live migration impossible (device attached)

    Passthrough uses IOMMU (Intel VT-d / AMD-Vi) to map the device's DMA memory directly into the VM's memory space, without going through the host.

    Hardware and BIOS Prerequisites

    Passthrough does not work on just any hardware. Here is what you need to check before getting started:

    Prerequisites Checklist

    • CPU with IOMMU: Intel VT-d (most Core CPUs since 4th gen, all Xeon) or AMD-Vi (all Ryzen, EPYC)
    • IOMMU enabled in BIOS: usually found under "VT-d" (Intel) or "IOMMU" / "SVM" (AMD) in advanced BIOS settings
    • Isolated IOMMU groups: each PCIe slot should ideally be in its own IOMMU group — server motherboards (Supermicro, Dell, HPE) handle this well, some consumer boards group everything together
    • Secondary GPU for passthrough: the Proxmox host needs a display (even minimal) — do not pass through the primary GPU used for the console
    • UEFI BIOS (OVMF) for the VM: GPU passthrough works much better with UEFI than with SeaBIOS, especially for Windows

    Watch Out for IOMMU Groups

    PCI passthrough works by IOMMU group: all devices in the same group must be passed together to the same VM (or all remain on the host). On some consumer motherboards, all PCIe slots share the same group — making selective passthrough impossible. Check your groups with find /sys/kernel/iommu_groups/ -type l before getting started. The ACS override patch can help, but it reduces IOMMU security.

    Step-by-Step IOMMU Configuration

    The first step, common to all types of passthrough (GPU, USB, PCI), is to enable IOMMU at the Linux kernel level on the Proxmox host.

    Step 1: Bootloader Parameters

    The iommu=pt (passthrough mode) parameter tells the kernel not to translate DMA addresses for host devices — which improves host network and disk performance while enabling IOMMU for VMs.
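    As a sketch, on a host that boots via GRUB (the default for ext4/LVM installs), enabling IOMMU looks like this; the flags shown are the standard Intel ones, and on recent kernels AMD enables its IOMMU by default, so iommu=pt alone is usually enough there:

```shell
# /etc/default/grub -- add the IOMMU flags to the kernel command line (Intel example)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# Apply the new command line and reboot
update-grub
reboot
```

    Hosts that boot with systemd-boot (typically ZFS-on-root installs) edit /etc/kernel/cmdline instead, then run proxmox-boot-tool refresh.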

    Step 2: Load VFIO Modules

    The principle: you blacklist GPU drivers on the host so the GPU is not initialized at boot, then reserve it via VFIO so it is available exclusively for passthrough to VMs.
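    A minimal sketch of that setup (the 10de:xxxx IDs below are placeholders, read the real vendor:device IDs from your own lspci output):

```shell
# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# /etc/modprobe.d/blacklist.conf -- keep host drivers away from the GPU
blacklist nouveau
blacklist nvidia
blacklist amdgpu
blacklist radeon

# /etc/modprobe.d/vfio.conf -- bind the GPU (and its audio function) to vfio-pci
# Find the vendor:device IDs with: lspci -nn | grep -iE 'vga|audio'
options vfio-pci ids=10de:xxxx,10de:yyyy

# Rebuild the initramfs so the changes take effect, then reboot
update-initramfs -u -k all
reboot
```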

    Step 3: Verify IOMMU

    # Verify that IOMMU is active
    dmesg | grep -e DMAR -e IOMMU
    # Should display: "DMAR: IOMMU enabled" or "AMD-Vi: IOMMU enabled"
    
    # List IOMMU groups and their devices
    find /sys/kernel/iommu_groups/ -type l | sort -V
    
    # Handy script to visualize groups:
    for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
      echo "IOMMU Group $(basename $g):"
      for d in $g/devices/*; do
        echo -e "\t$(lspci -nns $(basename $d))"
      done
    done

    GPU Passthrough: NVIDIA and AMD

    GPU passthrough is the most sought-after use case. It delivers native graphics performance inside a VM — gaming, 3D rendering, video transcoding, AI/ML compute (CUDA, ROCm).

    Gaming / VDI

    • Windows VM with dedicated GPU
    • Native DirectX / Vulkan performance
    • Looking Glass for display
    • Ideal for gaming homelab

    AI / ML / Compute

    • CUDA / cuDNN / TensorRT (NVIDIA)
    • ROCm (AMD)
    • Training and inference in VM
    • GPU workload isolation

    Video Transcoding

    • NVENC / NVDEC (NVIDIA)
    • VCE / VCN (AMD)
    • Plex, Jellyfin hardware transcode
    • Streaming and video editing

    VDI / Virtual Desktops

    • GPU-accelerated virtual desktops
    • CAD/CAM (SolidWorks, AutoCAD)
    • vGPU with NVIDIA A-series/L-series
    • Multi-user with SR-IOV

    VM Configuration for GPU Passthrough

    # Create a VM with UEFI (OVMF) — required for GPU passthrough
    qm create 100 --name gpu-vm --memory 16384 --cores 8 \
      --bios ovmf --efidisk0 local-lvm:1,format=raw,efitype=4m,pre-enrolled-keys=1 \
      --machine q35 --cpu host,hidden=1 \
      --net0 virtio,bridge=vmbr0
    
    # Attach the GPU (replace 01:00 with the actual PCI address)
    # Specifying 01:00 without a function number passes every function of the
    # device together: the GPU at 01:00.0 and its HDMI audio at 01:00.1,
    # which covers the usual "GPU + audio in the same IOMMU group" case
    qm set 100 --hostpci0 01:00,pcie=1,rombar=1,x-vga=1
    
    # For NVIDIA (avoids error code 43 on older drivers):
    qm set 100 --args '-cpu host,kvm=off'

    Key Parameters Explained

    Parameter | Role
    machine: q35 | Modern chipset with native PCIe support (vs i440fx, which emulates legacy PCI)
    cpu: host,hidden=1 | Exposes the real CPU to the VM and hides virtualization (required for NVIDIA)
    bios: ovmf | UEFI firmware, required for GOP (Graphics Output Protocol) on modern GPUs
    pcie=1 | Exposes the device in PCIe mode (vs legacy PCI), required for recent GPUs
    rombar=1 | Maps the GPU's option ROM (VBIOS firmware) into the VM, required for initialization
    x-vga=1 | Marks this GPU as the VM's primary display

    NVIDIA

    • Works very well once configured
    • Requires hidden=1 / kvm=off (VM detection bypass)
    • CUDA, NVENC, cuDNN — dominant AI/ML ecosystem
    • vGPU available (A/L series with license)

    AMD

    • Easier to configure (no VM detection)
    • Reset bug fixed on recent GPUs (RDNA2+)
    • ROCm for compute (growing support)
    • Legacy "reset bug" on Polaris/Vega (workaround with vendor-reset)

    USB Passthrough

    USB passthrough is simpler than GPU passthrough and supports hot-plug — you can attach and detach USB devices to a running VM.

    Two USB Passthrough Methods

    By Device (USB ID)

    Passes a specific USB device, identified by its vendor:product ID. The device can be on any USB port.

    # Find the USB ID
    lsusb
    # Bus 001 Device 005: ID 1a86:7523 ...
    
    # Attach to the VM
    qm set 100 --usb0 host=1a86:7523

    By USB Controller (PCI)

    Passes an entire USB controller (with all its ports) via PCI passthrough. All ports on that controller are then in the VM.

    # Find the USB controller
    lspci | grep USB
    # 00:14.0 USB controller: Intel ...
    
    # Pass the entire controller
    qm set 100 --hostpci1 00:14.0

    Common use cases: Zigbee/Z-Wave dongles (Home Assistant, Jeedom), USB license keys (HASP, CodeMeter), card readers, printers, scanners, GPS/GNSS receivers, RS-232/RS-485 serial adapters for industrial applications.

    Generic PCI Passthrough

    Beyond GPU and USB, any PCI/PCIe device can be passed through to a VM. The most common use cases:

    Device | Use Case | Difficulty
    Network Card (NIC) | Dedicated 10/25/100 GbE networking, DPDK, pfSense/OPNsense | Easy
    HBA / RAID Controller | Direct disk access (TrueNAS, ZFS in VM) | Easy
    GPU (detailed above) | Gaming, AI/ML, transcoding, VDI | Moderate
    Video Capture Card | HDMI/SDI capture, streaming, video surveillance | Easy
    Sound Card / Pro Audio | Audio production, low latency | Easy
    Serial / Industrial Card | SCADA, PLCs, industrial control | Easy
    Intel QSV (iGPU) | Hardware transcoding with Intel iGPU (GVT-g) | Advanced
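    The pattern is the same for any of these devices. For instance, a sketch for a dedicated NIC (the PCI address 02:00.0 and VMID 110 are placeholders), which needs none of the GPU-specific flags:

```shell
# Identify the NIC and its PCI address
lspci -nn | grep -i ethernet

# Attach it to the VM -- no rombar/x-vga needed for non-display devices
qm set 110 --hostpci0 02:00.0,pcie=1
```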

    vGPU and SR-IOV: Sharing a GPU Between VMs

    Standard passthrough assigns an entire GPU to a single VM. To share a GPU between multiple VMs, two technologies exist:

    NVIDIA vGPU (mediated passthrough)

    • GPU shared between N VMs (configurable profiles)
    • A series (datacenter), L series (inference)
    • NVIDIA vGPU license required (expensive)
    • Specific vGPU drivers (not gaming drivers)

    SR-IOV (Intel / AMD)

    • Native hardware sharing (Virtual Functions)
    • Intel Flex (Arctic Sound), SR-IOV network cards
    • No additional license for Intel
    • Limited hardware support (specific GPUs)

    In practice: vGPU and SR-IOV are mainly relevant for multi-user VDI environments and shared compute clusters. For a homelab or SMB infrastructure with 1-2 GPUs, standard passthrough (1 GPU = 1 VM) is simpler and more performant.

    Troubleshooting: Common Issues

    NVIDIA Code 43 (Windows)

    The NVIDIA driver detects the VM and refuses to work. Solution: add cpu: host,hidden=1 in the VM config and use the q35 chipset with OVMF (UEFI). Since the R465 drivers (2021), NVIDIA has officially allowed passthrough on GeForce GPUs, so Code 43 mainly affects older driver versions.

    AMD Reset Bug

    On AMD Polaris and Vega GPUs, the GPU does not properly reinitialize after a VM reboot — you need to reboot the entire host. Solution: use the vendor-reset module or switch to an RDNA2+ GPU (RX 6000/7000/9000) which does not have this issue.
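    Installing vendor-reset is typically done via DKMS; a sketch of the usual steps (check the project's README at github.com/gnif/vendor-reset for current instructions):

```shell
# Build dependencies and the module itself
apt install pve-headers-$(uname -r) dkms git
git clone https://github.com/gnif/vendor-reset
cd vendor-reset
dkms install .

# Load the module at boot
echo "vendor-reset" >> /etc/modules
update-initramfs -u -k all
```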

    Black Screen After GPU Passthrough

    The GPU does not initialize in the VM. Check: OVMF BIOS (not SeaBIOS), rombar=1 enabled, x-vga=1 for display. If the passed GPU is the only one in the system, the host must use another display (iGPU, serial console, or headless).
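    If the GPU still shows a black screen, a common workaround is supplying the VM a clean VBIOS dump via the romfile option. A sketch (01:00.0 is a placeholder address, and the dump must be taken while the GPU is not bound to a running VM):

```shell
# Dump the VBIOS from sysfs
cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /usr/share/kvm/vbios.bin
echo 0 > rom

# Reference the dump in the VM config (path is relative to /usr/share/kvm)
qm set 100 --hostpci0 01:00,pcie=1,x-vga=1,romfile=vbios.bin
```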

    Shared IOMMU Group

    The GPU shares its IOMMU group with other devices (PCI bridge, audio, etc.). Solutions: pass all devices in the group together, change PCIe slot, or enable the ACS override patch (pcie_acs_override=downstream,multifunction) — with the caveat that this reduces IOMMU isolation.
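    As a last resort, the ACS override is added to the same kernel command line as the IOMMU flags (GRUB example; remember this weakens isolation between groups):

```shell
# /etc/default/grub -- kernel command line with the ACS override patch
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"

# Apply and reboot
update-grub
reboot
```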

    BAR Memory Too Large (error: not enough IO)

    Modern GPUs (RTX 3090/4090, RX 7900 XT) have very large BARs (> 256 MB). Solution: enable Resizable BAR (ReBAR) and Above 4G Decoding in the host BIOS; with the q35 machine type and OVMF, the VM can then map the 64-bit BARs correctly.

    Limitations and Trade-offs

    Aspect | Emulation (virtio) | Passthrough (VFIO)
    Performance | 70-95% of native | ~99% of native
    Live migration | Yes | No (physical device attached)
    Sharing between VMs | Yes | No (1 device = 1 VM)
    Snapshot/backup | Normal | VM must be powered off
    Complexity | None | IOMMU, VFIO, groups, drivers
    Portability | VM freely movable | Tied to physical hardware

    The Fundamental Trade-off

    Passthrough delivers maximum performance but reduces flexibility: no live migration, no hot snapshots, VM tied to physical hardware. Use it only when emulation is not enough (GPU compute, gaming, specialized peripherals). For standard networking and storage, virtio drivers offer excellent performance without these constraints.

    Our Approach at RDEM Systems

    At RDEM Systems, we use passthrough selectively:

    Passthrough in Our Infrastructure

    Dedicated Network Cards

    10 GbE NIC passthrough for VMs that require maximum network performance or direct access to the physical network (firewalls, virtual routers).

    HBA Controllers for Storage

    Passthrough of SAS/SATA controllers in HBA mode for storage VMs (TrueNAS, ZFS) — direct disk access without storage virtualization overhead.

    Selective Approach

    We default to virtio drivers (networking, storage) to preserve live migration and flexibility. Passthrough is only used when emulated performance is not sufficient — which is rare with modern virtio.

    For a turnkey deployment, discover our dedicated Proxmox server plans with GPU passthrough: we configure IOMMU, VFIO and drivers for your specialized workloads.

    Best Practices

    Proxmox Passthrough Checklist

    • Check IOMMU groups before purchasing hardware — a motherboard with poorly isolated groups makes selective passthrough impossible
    • Always use OVMF (UEFI) for GPU passthrough — SeaBIOS does not support GOP on modern GPUs
    • q35 chipset for VMs with PCI passthrough — the i440fx chipset emulates legacy PCI and causes issues with PCIe GPUs
    • Blacklist GPU drivers on the host (nouveau, nvidia, amdgpu) to prevent the host from initializing the GPU before VFIO
    • Keep host access: IPMI/iDRAC/iLO, or a second GPU/iGPU for the Proxmox console — if the GPU passed to the VM is the only one, you lose host display
    • Schedule backups with VM powered off for VMs with passthrough — hot snapshots do not capture the physical device state
    • Test VM reboot before going to production — some GPUs (AMD Polaris/Vega) do not reinitialize properly


    Need GPU Passthrough or Custom Proxmox Infrastructure?

    We deploy Proxmox clusters with GPU, USB and PCI passthrough for your specialized workloads — AI/ML, VDI, transcoding. IOMMU, VFIO configuration and optimization included.