VMware to Proxmox Migration: Full Guide 2025-2026
Migrating a VMware infrastructure to Proxmox may seem complex, but with the right methodology it can be carried out securely and with minimal disruption. Here is our complete guide based on hundreds of successful migrations, updated for the post-Broadcom 2025-2026 landscape.
Post-Broadcom context: why migrations are accelerating
Since Broadcom's acquisition of VMware in late 2023, the virtualization market has undergone an unprecedented upheaval. Broadcom's new commercial policies have triggered a wave of migrations toward open-source alternatives, with Proxmox VE leading the way.
Businesses are facing several major changes:
- Price increases of 3x to 10x: VMware license renewals show increases of 200% to 1000% depending on configuration and fleet size
- Forced VCF bundles: Broadcom mandates VMware Cloud Foundation, bundling vSphere, vSAN, NSX and Aria into a single package, even if you only use some of these products
- End of perpetual licenses: forced transition to an annual subscription model, eliminating any capitalization on past investments
- Partner program terminated: small resellers and integrators are cut out, reducing options for customers
- Contractual uncertainty: terms change regularly, making budget planning difficult
For a detailed total cost of ownership analysis and achievable savings, see our TCO VMware vs Proxmox 2026 comparison.
Good to know
Based on feedback from our clients, the average annual savings after migrating to Proxmox is 70% to 95% on licensing costs. A 3-server cluster that cost EUR 25,000/year in VMware licenses drops to less than EUR 1,500/year with Proxmox (support included).
Prerequisites and preparation
Before starting the migration, thorough preparation is essential. Here are the key elements to validate.
Complete VMware inventory
Start by exhaustively documenting your VMware environment. Use RVTools or the vSphere Client to export this information:
VMware inventory checklist:
- VMs: name, OS (exact version), vCPU, RAM, disk size (provisioned vs used), active snapshots
- Network: VLANs, IP addresses (static/DHCP), firewall rules, load balancers, port groups
- Storage: datastores (type, capacity, usage), retention policies, inter-site replication
- Dependencies: application clusters, databases, AD/LDAP services, NFS/CIFS shares
- Backup: current policies (Veeam, Commvault, etc.), retention, documented RPO/RTO
- VMware specifics: VMs using vSAN, NSX, DRS, vGPU, RDM (Raw Device Mapping)
- Licenses: Windows licenses tied to hardware, application licenses tied to the hypervisor
# Export inventory via PowerCLI (on a workstation with vSphere PowerCLI)
Connect-VIServer -Server vcenter.example.com
Get-VM | Select Name, PowerState, NumCpu, MemoryGB,
@{N='UsedSpaceGB';E={[math]::Round($_.UsedSpaceGB,2)}},
@{N='ProvisionedSpaceGB';E={[math]::Round($_.ProvisionedSpaceGB,2)}},
Guest, Notes | Export-Csv -Path vm-inventory.csv -NoTypeInformation
# List datastores and their usage
Get-Datastore | Select Name, Type, CapacityGB, FreeSpaceGB |
Export-Csv -Path datastores.csv -NoTypeInformation
# List networks and port groups
Get-VirtualPortGroup | Select Name, VLanId, VirtualSwitch |
Export-Csv -Path networks.csv -NoTypeInformation
Proxmox capacity planning
Size your Proxmox cluster based on the collected inventory:
- Storage: plan for 2x the total VM size (space for migration + growth headroom). For Ceph, plan for 3x (replication factor 3)
- RAM: keep the same total allocation + 10-15% headroom for the hypervisor and Ceph (if used)
- CPU: the vCPU:pCPU ratio can remain identical. KVM/QEMU overhead is comparable to ESXi
- Network: minimum 1 Gbps dedicated to migration, ideally 10 Gbps. Plan a migration network separate from the production network
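The sizing rules above can be computed directly from the PowerCLI export. A minimal sketch, assuming the vm-inventory.csv column order produced by the inventory script earlier (adjust the field indexes if your export differs):

```shell
# Rough sizing sketch: sum RAM and used disk from the PowerCLI export,
# then apply the headroom rules above. Field positions assume the Select
# order used earlier: Name,PowerState,NumCpu,MemoryGB,UsedSpaceGB,...
size_cluster() {
  csv=$1
  awk -F',' 'NR > 1 {
    gsub(/"/, "")                # strip Export-Csv quoting
    ram  += $4                   # MemoryGB
    disk += $5                   # UsedSpaceGB
  }
  END {
    printf "RAM needed (GB): %.0f\n", ram * 1.15   # +15% hypervisor headroom
    printf "Disk, plain (GB): %.0f\n", disk * 2    # 2x for migration + growth
    printf "Disk, Ceph (GB): %.0f\n", disk * 3     # 3x for replication size 3
  }' "$csv"
}
```

Run it as `size_cluster vm-inventory.csv`; the multipliers match the rules above and should be tuned to your growth projections.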
Target Proxmox infrastructure
- Operational Proxmox VE cluster with a minimum of 3 nodes for high availability
- Network access between the VMware and Proxmox environments (direct or site-to-site VPN)
- Storage configured: Local-LVM, Ceph, ZFS or NFS/iSCSI depending on your needs
- Rollback plan documented with a tested fallback procedure
- Proxmox Backup Server (PBS) installed and configured for post-migration backups
To outsource your post-migration PBS backups, discover Proxmox PBS backup with NimbusBackup: off-site replication, encryption and deduplication included.
Important note
Identify VMs with specific VMware dependencies (vSAN, NSX, DRS, vGPU, RDM). These elements require special attention. VMs with RDM (Raw Device Mapping) must be converted to standard disks before migration. VMs using vGPU require GPU passthrough configuration on Proxmox.
Proxmox cluster preparation
Configure your Proxmox environment to host the migrated VMs.
Network configuration
Replicate the VMware network topology on Proxmox. Each VMware VLAN corresponds to a Linux bridge or a VLAN on a VLAN-aware bridge:
# /etc/network/interfaces on each Proxmox node
# Main VLAN-aware bridge (replaces the VMware vSwitch)
auto vmbr0
iface vmbr0 inet static
address 10.0.0.10/24
gateway 10.0.0.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
# Dedicated bridge for Ceph storage (10 Gbps network)
auto vmbr1
iface vmbr1 inet static
address 10.10.0.10/24
bridge-ports eno2
bridge-stp off
bridge-fd 0
# Dedicated bridge for migration (10 Gbps network)
auto vmbr2
iface vmbr2 inet manual
bridge-ports eno3
bridge-stp off
bridge-fd 0
# Example tagged VLAN on vmbr0 (equivalent to VMware port group)
# VMs select the VLAN tag directly in their configuration
Storage configuration
Several options depending on your needs (see the Proxmox Storage documentation):
- Local-LVM: high-performance local storage, ideal for testing and small deployments
- Ceph: distributed storage for high availability (replaces vSAN)
- NFS/iSCSI: to reuse an existing SAN (NetApp, Pure, etc.)
- ZFS: for deduplication, compression and native snapshots
# Example: create a Ceph pool to host migrated VMs
pveceph pool create vmware-migration --pg_num 128 --size 3
# size 3 = replication across 3 OSDs (equivalent to a vSAN FTT=1 policy)
# Add the pool as Proxmox storage
pvesm add rbd ceph-migration --pool vmware-migration --content images,rootdir
# Check available space
ceph df
The 4 VMware to Proxmox migration methods
Proxmox offers several V2V (Virtual-to-Virtual) migration approaches. The choice depends on your context: number of VMs, downtime tolerance, available bandwidth and team expertise.
Method 1: Proxmox 8.x Import Wizard (recommended)
Since Proxmox VE 8.1, the Import Wizard allows direct import from vCenter or ESXi. This is the simplest and most reliable method for the majority of cases.
Step-by-step procedure:
- In the Proxmox interface, go to Datacenter → Storage → Add → ESXi
- Enter the vCenter or ESXi host address along with credentials (user with read permissions)
Check "Skip Certificate Verification" if the host uses a self-signed certificate (the default on standalone ESXi); leave it unchecked when a CA-signed certificate is installed
- Browse the VM tree and select the one to import
- Configure the import options:
- Target storage: select local-lvm, ceph or ZFS
- Disk format: qcow2 (snapshots) or raw (performance)
- Network bridge: map each interface to the correct bridge/VLAN
- BIOS: SeaBIOS (legacy) or OVMF (UEFI) depending on the source VM
- Check "Live Import" if the VM must remain accessible during the transfer (the source VM stays active)
- Click "Import" and follow progress in the Task Viewer
- After the import, verify the VM's hardware configuration in the Hardware tab
Import Wizard prerequisites
The Import Wizard requires the source VM to be powered off or in read-only mode to ensure data consistency (except in Live Import mode). The account used must have "Virtual machine → Interaction → Power Off" and "Datastore → Browse Datastore" permissions in vCenter.
Method 2: Manual conversion with qemu-img
For more control, offline VMs, or if the Import Wizard is not compatible (older ESXi versions, VMs with specific disks), use qemu-img.
# === STEP 1: Export from VMware ===
# Option A: Export as OVA from vCenter (web interface)
# VM > Actions > Export OVF Template
# Option B: Export the VMDK directly via SCP from the ESXi host
scp root@esxi:/vmfs/volumes/datastore1/myvm/myvm-flat.vmdk /tmp/
# Option C: Use ovftool (VMware CLI)
ovftool vi://root@esxi/myvm /tmp/myvm.ova

# === STEP 2: Convert the disk ===
# Convert VMDK to qcow2 (recommended: supports snapshots)
qemu-img convert -f vmdk -O qcow2 myvm-flat.vmdk myvm.qcow2
# With compression (reduces transfer time)
qemu-img convert -f vmdk -O qcow2 -c myvm-flat.vmdk myvm.qcow2
# Convert to raw (best I/O performance, no qcow2 snapshots)
qemu-img convert -f vmdk -O raw myvm-flat.vmdk myvm.raw
# For sparse VMDKs (thin provisioning), specify the format
qemu-img convert -f vmdk -O qcow2 -p myvm.vmdk myvm.qcow2   # -p shows progress
# Verify the integrity of the converted file
qemu-img check myvm.qcow2
qemu-img info myvm.qcow2

# === STEP 3: Create the VM on Proxmox ===
# Create an empty VM with the correct config
qm create 100 --name myvm --memory 4096 --cores 4 \
  --net0 virtio,bridge=vmbr0,tag=100 \
  --ostype l26 --bios seabios --scsihw virtio-scsi-single
# Import the converted disk
qm importdisk 100 myvm.qcow2 local-lvm
# Attach the imported disk to the VM
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
# Set the boot disk
qm set 100 --boot order=scsi0
# For a UEFI VM, add an EFI disk
qm set 100 --bios ovmf --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1
Method 3: Live V2V migration (Veeam / Zerto)
For critical environments requiring near-zero RPO and minimal downtime, use a continuous replication tool:
Live migration workflow:
- Initial replication: full VM copy to Proxmox (in the background, without interruption)
- Continuous synchronization: changes are replicated in real time via VMware's CBT (Changed Block Tracking)
- Pre-cutover: verification of the replicated VM (boot test, connectivity)
- Final cutover: source VM shutdown, last delta sync (a few seconds), start on Proxmox
- Traffic redirection: DNS update or load balancer switch
- Veeam Backup & Replication: supports VMware to KVM/Proxmox replication via Instant VM Recovery then export
- Zerto: continuous replication with RPO of a few seconds, automated failover orchestration
- Carbonite Migrate: live migration with block-level synchronization, multi-hypervisor support
Actual downtime with live V2V
With properly configured continuous replication, the downtime during final cutover is typically 5 to 30 seconds per VM. This is the recommended method for databases, mail servers and any application that cannot tolerate more than a few minutes of interruption.
Method 4: Incremental CBT migration (large VMs)
For large VMs (500 GB to several TB), a full migration can take hours. The CBT (Changed Block Tracking) method significantly reduces transfer time and downtime:
CBT migration principle:
- Pass 1 (hot): full disk copy while the VM is running (can take hours)
- Pass 2 (hot): copy only blocks changed since pass 1 (CBT delta, much faster)
- Pass N (optional): repeat incremental passes to reduce the final delta
- Final pass (cold): stop the VM, last delta copy (a few minutes), start on Proxmox
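The passes above follow simple arithmetic: each pass copies only the delta written during the previous pass, so copy times shrink sharply. A rough sketch for estimating per-pass time (the change-rate figures in the usage note are illustrative assumptions, not benchmarks):

```shell
# Estimate the copy time of one migration pass from its delta size and
# the usable network bandwidth. time (min) = GB * 8 bits / (Gbit/s) / 60
pass_minutes() {
  delta_gb=$1   # data to copy in this pass (GB)
  gbps=$2       # usable network bandwidth (Gbit/s)
  awk -v d="$delta_gb" -v b="$gbps" 'BEGIN { printf "%.1f\n", d * 8 / b / 60 }'
}
```

For a 1 TB VM on 1 Gbps, `pass_minutes 1000 1` gives roughly 133 minutes for pass 1; if the VM writes around 5 GB/h, pass 2 copies an ~11 GB delta in about 1.5 minutes, and the final cold pass only the last few minutes of writes.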
# Enable CBT on the VMware VM (via PowerCLI)
$vm = Get-VM -Name "big-database-server"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.changeTrackingEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
# Create a snapshot to initialize CBT
New-Snapshot -VM $vm -Name "CBT-init" -Description "Enable CBT tracking"

# Use a tool like libguestfs or nbdkit to read the CBT deltas
# and apply them to the target Proxmox disk

# First full copy (in the background)
qemu-img convert -f vmdk -O qcow2 -p \
  "https://esxi/folder/big-db/big-db-flat.vmdk?dcPath=DC&dsName=DS1" \
  /tmp/big-db.qcow2
# Incremental passes use CBT to identify changed blocks
# and copy only the deltas, reducing final downtime to a few minutes
Strategy to minimize disruption
To minimize the impact on your users, follow this approach:
- Migrate in batches: start with non-critical VMs (dev, test, staging), then secondary services, and finally critical services
- Parallel migration: run both environments simultaneously during the transition phase
- Data replication: synchronize data in real time with CBT or a replication tool
- Progressive testing: validate each batch before moving to the next (checklist below)
- DNS / Load Balancer: gradually redirect traffic to Proxmox (low DNS TTL)
- Rollback ready: keep the VMware environment operational for 2 to 4 weeks after the last cutover
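The batching step can be scripted against the inventory CSV. The sketch below assumes each VM's tier (dev, secondary, critical) is recorded in the Notes column of the PowerCLI export, a tagging convention used here purely for illustration; map it to whatever your CMDB actually records:

```shell
# List VMs belonging to a migration batch, keyed on a tier tag stored in
# the Notes column ($8 in the PowerCLI export used earlier).
batch() {
  tier=$1; csv=$2
  awk -F',' -v t="$tier" 'NR > 1 { gsub(/"/, ""); if ($8 == t) print $1 }' "$csv"
}
```

Typical use: `batch dev vm-inventory.csv` to get the first migration wave, then `batch critical vm-inventory.csv` last.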
Common pitfalls and solutions
Here are the most frequent issues encountered during VMware to Proxmox migrations, and how to resolve them:
VMware Tools and driver conflicts
VMware Tools installed in VMs can cause conflicts with Proxmox virtio drivers (black screen at boot, network not working). Solution: uninstall VMware Tools BEFORE migration if possible, or immediately after in recovery mode. On Linux: apt remove open-vm-tools. On Windows: uninstall via the Control Panel.
EFI/UEFI boot failure
VMware VMs in EFI mode will not boot if Proxmox is configured with SeaBIOS. Solution: change the Proxmox VM BIOS to OVMF (UEFI) and add an EFI disk: qm set 100 --bios ovmf --efidisk0 local-lvm:1,efitype=4m. If boot still fails, check the boot order in the EFI variables.
No network after migration (missing virtio driver)
Windows VMs do not ship with the virtio-net driver pre-installed, so the network adapter is missing after migration. Solution: before migration, temporarily add an e1000 network adapter (works without additional drivers) to retain network access, then install the virtio-win drivers and switch to virtio. Alternatively, mount the virtio-win ISO as a CD-ROM in the VM.
Incompatible SCSI controller
VMware VMs often use pvscsi or lsilogic controllers. Proxmox recommends virtio-scsi-single for best performance. Solution: during import, select virtio-scsi-single as the SCSI controller. If the VM does not boot (Windows), temporarily use lsi while installing the virtio drivers, then switch.
Lost Windows activation
The hypervisor change alters the hardware fingerprint, which can trigger Windows reactivation. Solution: for Volume licenses (KMS/MAK), reactivation is automatic or via slmgr /ato. For OEM licenses tied to VMware hardware, contact Microsoft for a license transfer. Document your license keys BEFORE migration.
Missing Linux kernel modules
Some Linux distributions build their initramfs with VMware-optimized modules (vmw_pvscsi, vmxnet3) but without the virtio modules. Solution: before migration, verify that the virtio_blk, virtio_net and virtio_scsi modules are present in the initramfs: lsinitramfs /boot/initrd.img-$(uname -r) | grep virtio. If not, regenerate it: update-initramfs -u.
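That check is easy to get wrong by hand; here is a small pre-flight wrapper for it (lsinitramfs is Debian/Ubuntu; on RHEL-family systems, generate the listing with lsinitrd instead):

```shell
# Check a saved initramfs listing for the virtio modules needed on Proxmox.
check_virtio() {
  listing=$1   # file containing the lsinitramfs/lsinitrd output
  missing=0
  for m in virtio_blk virtio_net virtio_scsi; do
    grep -q "$m" "$listing" || { echo "MISSING: $m"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "OK: all virtio modules present"
}
# Usage on the source VM, before migration:
#   lsinitramfs /boot/initrd.img-$(uname -r) > /tmp/initrd.txt
#   check_virtio /tmp/initrd.txt   # on MISSING, run update-initramfs -u
```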
Real migration timelines
Here are real migration benchmarks by VM size, measured on our client projects. Times include format conversion (VMDK to qcow2) and network transfer.
| VM Size | 1 Gbps Network | 10 Gbps Network | Recommended method |
|---|---|---|---|
| 10 GB | ~2-3 min | <30 sec | Import Wizard / qemu-img |
| 50 GB | ~10-15 min | ~2-3 min | Import Wizard / qemu-img |
| 200 GB | ~45-60 min | ~8-12 min | Import Wizard / CBT |
| 500 GB | ~2-3 h | ~20-30 min | Incremental CBT |
| 1 TB | ~4-6 h | ~40-60 min | Incremental CBT / Live V2V |
| 2 TB+ | ~8-12 h | ~1.5-2 h | Live V2V with continuous replication |
Thin provisioning impact
The times above are based on actually used space (thin provisioning). A VM with a provisioned 500 GB disk but only using 50 GB will migrate at the speed of 50 GB. The Proxmox Import Wizard handles this optimization automatically. With qemu-img, use -p to track actual progress.
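You can see the mechanism locally with any sparse file: the apparent (provisioned) size and the allocated size diverge, and only allocated blocks need to travel. A quick demonstration, assuming GNU coreutils on Linux:

```shell
# Thin provisioning in action: "provision" 500 MB without allocating it.
truncate -s 500M sparse.img
virtual=$(stat -c %s sparse.img)         # apparent (provisioned) size, bytes
actual_kb=$(du -k sparse.img | cut -f1)  # blocks actually allocated, KB
echo "provisioned: $virtual bytes, allocated: ${actual_kb} KB"
```

Until data is written, the allocated size stays near zero, which is exactly why a mostly empty 500 GB disk migrates at the speed of its 50 GB of used space.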
Post-migration: validation and optimization
Driver and agent installation
After migration, optimize performance by installing the virtio drivers and QEMU Guest Agent:
# === LINUX ===
# Install the QEMU Guest Agent
apt install qemu-guest-agent   # Debian/Ubuntu
dnf install qemu-guest-agent   # RHEL 9 / Rocky / AlmaLinux
yum install qemu-guest-agent   # RHEL 7-8 / CentOS
# Enable and start the service
systemctl enable --now qemu-guest-agent
# Verify that virtio modules are loaded
lsmod | grep virtio
# Expected: virtio_blk, virtio_net, virtio_scsi, virtio_balloon
# Remove VMware Tools
apt remove --purge open-vm-tools open-vm-tools-desktop   # Debian/Ubuntu
dnf remove open-vm-tools                                 # RHEL/Rocky
# Regenerate initramfs without VMware modules
update-initramfs -u   # Debian/Ubuntu
dracut --force        # RHEL/Rocky

# === WINDOWS ===
# 1. Mount the virtio-win ISO as CD-ROM in Proxmox
#    (Download from https://github.com/virtio-win/virtio-win-pkg-scripts)
# 2. In the Windows VM, install:
#    - virtio-win-drivers (network, storage, ballooning, serial)
#    - virtio-win-guest-tools (QEMU Guest Agent + drivers)
# 3. Uninstall VMware Tools via Programs and Features
# 4. Reboot
Complete validation checklist
- VM boots correctly (no boot errors, login screen visible)
- QEMU Guest Agent responds (visible in the Proxmox interface: IP shown in Summary)
- Network connectivity working (gateway ping, DNS resolution, external service access)
- All network interfaces are present and correctly configured
- Business applications operational (full functional testing)
- I/O performance meets expectations (fio or diskspd benchmark)
- System clock synchronized (NTP/chrony working)
- Proxmox Backup (PBS) configured, first backup successful and restore test validated
- Monitoring in place (CPU, RAM, disk, network alerts)
- VMware Tools uninstalled (no vmtoolsd or vmware service running)
- Windows license activated (if applicable)
- High availability configured (if Proxmox cluster)
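Much of this checklist can be automated with a tiny pass/fail wrapper. The commands in the usage comments are examples with placeholder addresses, not a fixed list; substitute your own gateway, hostnames and services:

```shell
# Run one validation check: print PASS if the command exits 0, FAIL otherwise.
check() {
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $label"
  else
    echo "FAIL: $label"
  fi
}
# Examples (run inside the migrated VM; addresses are placeholders):
#   check "gateway reachable"  ping -c 1 -W 2 10.0.0.1
#   check "DNS resolution"     getent hosts example.com
#   check "guest agent active" systemctl is-active --quiet qemu-guest-agent
```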
Post-migration optimization
Once VMs are migrated and validated, apply these optimizations:
- Enable memory ballooning: allows Proxmox to reclaim unused RAM (virtio-balloon driver)
- Enable discard/TRIM: add discard=on on disks to reclaim freed space
- CPU type host: use --cpu host to expose all CPU instructions to the VM
- NUMA topology: for VMs with large RAM, enable --numa 1
- IO thread: enable iothread=1 on virtio-scsi disks for better I/O performance
- Configure disaster recovery: set up a disaster recovery plan with inter-site replication
To explore our managed Proxmox VPS plans or benefit from post-VMware migration managed services, see also our case study: 100 VMware VMs migration (fr).
# Recommended optimizations for a migrated VM (example VM 100)
qm set 100 --cpu host       # Native CPU
qm set 100 --balloon 2048   # Ballooning (min 2 GB)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,iothread=1,ssd=1
qm set 100 --numa 1         # NUMA (if > 8 GB RAM)
qm set 100 --agent enabled=1,fstrim_cloned_disks=1   # Guest Agent + TRIM after clone/restore
# Verify the final configuration
qm config 100
Cleanup and VMware decommissioning
After full validation of all migrated VMs (wait 2 to 4 weeks of stable operation), proceed with decommissioning the VMware infrastructure:
- Verify that no VM is still running on VMware (final inventory)
- Archive the vCenter configurations (export settings, DRS rules, alarms)
- Remove remaining migration snapshots on Proxmox
- Release VMware licenses (cancel Broadcom subscriptions)
- Reassign or retire the hardware from former ESXi hosts
- Update infrastructure documentation (network diagrams, CMDB, procedures)
Related articles
- Proxmox vs VMware: Full Comparison 2025
- TCO VMware vs Proxmox 2026: real cost analysis
- Disaster Recovery Plan with Proxmox
- Backup strategy: the 3-2-1 rule applied
- GPU, USB and PCI Passthrough on Proxmox
- Docker on Proxmox: VM, LXC or Docker-in-LXC?
- Already on Proxmox 8? Upgrade guide to Proxmox 9
- Case study: successful VMware to Proxmox migration (fr)
- Leaving VMware: Why and How
Need help with your VMware to Proxmox migration?
RDEM Systems has been delivering turnkey VMware to Proxmox migrations since 2017. Free infrastructure audit, detailed migration plan and expert support through to VMware decommissioning.