Proxmox 8 to 9 Migration: hands-on experience and methodology
Proxmox VE 9.0 was released in August 2025, based on Debian 13 Trixie. We have migrated our entire production infrastructure to Proxmox 9.1. This article details our methodology, our kernel opt-in approach, a Debian 12 vs 13 comparison, and the technical steps for a successful migration.
Why migrate to Proxmox 9
Proxmox VE 9.x brings significant improvements across three areas: the base OS, the Linux kernel, and virtualization components. Here are the main reasons that justify the migration.
Debian 13 Trixie: long-term support
Proxmox VE 9 is based on Debian 13 "Trixie", released in August 2025. Debian 13 will receive security support until at least 2028 (LTS until 2030+). Staying on Proxmox 8 (Debian 12 Bookworm) means progressively approaching the end of active support for Debian 12 (standard support expected to end mid-2026, LTS until 2028).
Migrating now gives us a 4+ year margin on base OS support, instead of having to plan an emergency migration when Debian 12 approaches end of life.
Up-to-date kernel and new features
Proxmox 9.x ships with kernel 6.14, bringing notable improvements: better support for recent hardware (latest-gen Intel/AMD CPUs, NVMe, network cards), I/O performance optimizations, security improvements (more efficient Spectre/Meltdown mitigations), and improved io_uring support for storage workloads.
Debian 12 end of support approaching
Even though Debian 12 LTS will remain supported until 2028, Proxmox VE 8.x packages will no longer receive new features — only security fixes. New features (GUI, SDN, Ceph) will only be available in the 9.x branch. Migrating now lets you benefit from the latest improvements.
Our choice
At RDEM Systems, we migrated our entire production infrastructure to Proxmox 9.1 as early as December 2025. Zero incidents, identical or better performance, and a fresh OS base for the next 4+ years.
Debian 12 vs 13 / Proxmox 8 vs 9 — comparative study
The move from Proxmox 8 to 9 involves a complete base OS change (Debian 12 to 13). Here is a detailed comparison of key components.
System components
| Component | Proxmox 8 (Debian 12) | Proxmox 9 (Debian 13) |
|---|---|---|
| Base OS | Debian 12 Bookworm | Debian 13 Trixie |
| Kernel | 6.8.x (proxmox-kernel) | 6.14.x (proxmox-kernel) |
| systemd | 252 | 257 |
| OpenSSL | 3.0.x | 3.5.x |
| Python | 3.11 | 3.13 |
| glibc | 2.36 | 2.41 |
Proxmox VE components
| Component | Proxmox 8.4 | Proxmox 9.1 |
|---|---|---|
| QEMU | 9.2.x | 10.x |
| LXC | 6.0.x | 6.1.x |
| Ceph | Reef 18.2 | Squid 19.x |
| GUI / Web UI | ExtJS 7 | ExtJS 7 improved + new widgets |
| SDN | OVN, VXLAN, EVPN | Improved OVN, new SDN panel |
Concrete benefits of upgrading to 9.x: better hardware support, QEMU 9.x with improved I/O performance, more stable and performant Ceph Squid, web interface improvements, and a freshly supported Debian base for 4+ years.
Our approach — kernel opt-in
Before launching the full dist-upgrade from Debian 12 to Debian 13, we chose to use the kernel opt-in method. This approach involves installing the Proxmox 9 kernel on your existing Proxmox 8 installation, without migrating the base OS.
Why kernel opt-in?
- Version control: you choose exactly when to switch to the new kernel, independently of the dist-upgrade. This lets you validate the kernel on your specific hardware before changing anything else.
- Easy rollback: if the new kernel causes issues (GPU driver, exotic network card, third-party DKMS module), simply reboot on the old kernel via GRUB. No dist-upgrade rollback required.
- Production stability: validate the kernel first (the most critical component), then perform the dist-upgrade in a second step. Two stages instead of a big bang.
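The rollback path is worth rehearsing before committing to the new kernel. A sketch with `proxmox-boot-tool` (the version string is an example; take the real one from `proxmox-boot-tool kernel list`):

```shell
# List the kernels known to the boot loader
proxmox-boot-tool kernel list

# Boot the new kernel once, without making it the permanent default;
# the following reboot returns to the previous default automatically
proxmox-boot-tool kernel pin 6.14.8-2-pve --next-boot
reboot

# If the new kernel validates on your hardware, pin it permanently...
proxmox-boot-tool kernel pin 6.14.8-2-pve

# ...or drop the pin to fall back to the default kernel
proxmox-boot-tool kernel unpin
```

The `--next-boot` variant is the safety net: a hang or driver regression is cured by a simple power cycle, with no GRUB menu interaction needed.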
How to proceed
Installing the opt-in kernel takes just a few commands:
```shell
# 1. Make sure you're on Proxmox 8.4 (latest 8.x version)
apt update && apt dist-upgrade

# 2. Install the Proxmox 9 opt-in kernel
apt install proxmox-kernel-6.14

# 3. Pinning: ensure the new kernel takes priority
proxmox-boot-tool kernel pin 6.14   # adjust to the exact version number

# 4. Reboot on the new kernel
reboot

# 5. Verification after reboot
uname -r     # should display 6.14.x-x-pve
pveversion   # still Proxmox 8.4, but with the 9.x kernel
```
Important
The kernel opt-in is an optional step. We chose it for performance reasons — the significant gains from the new kernel justified prior validation on our production hardware. You can skip straight to the dist-upgrade if you are confident in your hardware.
Proxmox 8 to 9 migration steps
Here are the detailed technical steps we follow for each migration. Also refer to the official Proxmox documentation.
Audit and preparation
First, check the state of your infrastructure:
```shell
# Check current version
pveversion -v

# Run the official checker
pve8to9 --full

# Check disk space (minimum 5 GB free)
df -h /

# Back up cluster configuration
tar czf /root/pve-config-backup.tar.gz /etc/pve/

# Check Ceph health (if applicable)
ceph status
ceph osd tree
```
Update to Proxmox 8.4
Make sure you are on the latest 8.x version before migrating:
```shell
# Full update to 8.4
apt update
apt dist-upgrade -y

# Reboot if a new kernel was installed
reboot

# Verify
pveversion   # should display 8.4-x
```
Migration to Proxmox 9.1
Update the repositories and launch the dist-upgrade:
```shell
# Update APT sources to Debian 13 + PVE 9
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/pve-*.list
# Or better: manually edit each file

# Update and dist-upgrade
apt update
apt dist-upgrade -y
# Answer configuration prompts
# (keep local version unless otherwise specified)

# Mandatory reboot
reboot
```
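A forgotten source file pointing at the old codename is a classic cause of a half-upgraded system. Before launching `apt dist-upgrade`, a quick sanity check (paths as found on a standard PVE install) catches it:

```shell
# An empty grep result means every APT source now points at trixie
if grep -rn bookworm /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null; then
    echo "WARNING: some sources still reference bookworm"
else
    echo "OK: no bookworm references left"
fi
```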
Post-migration verification
After rebooting, verify everything is working correctly:
```shell
# Proxmox version
pveversion -v   # should display PVE 9.1

# Kernel
uname -r        # should display 6.14.x-x-pve

# Proxmox services
systemctl status pvedaemon pveproxy pvestatd

# Cluster (if multi-node)
pvecm status

# Ceph (if applicable)
ceph status
ceph osd versions   # should display Squid

# VMs and containers
qm list
pct list

# GUI access: https://node:8006
```
Official documentation: for full details and edge cases, refer to the official Proxmox 8 to 9 migration guide.
Our hands-on experience
We migrated our entire production infrastructure to Proxmox 9.1 in December 2025. Here is our concrete feedback.
Migration scope
- Multi-node production clusters
- Ceph storage (Reef to Squid) — seamless migration
- PBS 3.x to 4.x — continuous backups without interruption
- SDN OVN — configuration preserved
Results
- 0 post-migration incidents
- ~30 min per node (dist-upgrade + reboot)
- 0 s of VM downtime (live migration)
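The zero-downtime figure comes from evacuating each node with live migration before upgrading it. A minimal sketch with `qm` (VM IDs and the target node name are illustrative):

```shell
# Live-migrate a running VM to another cluster node before upgrading this one
qm migrate 100 pve-node2 --online

# VMs with disks on local storage also need their storage moved
qm migrate 101 pve-node2 --online --with-local-disks
```

Once the node is empty, the dist-upgrade and reboot can proceed without touching any running guest; the VMs are migrated back afterwards.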
Migration pricing: to learn about our Proxmox 8 to 9 migration packages and support details, see our detailed pricing on RDEM Managed Services.
Prerequisites and key considerations
Key considerations
- Ceph: never migrate more than one Ceph node at a time. Wait until the Ceph cluster is in `HEALTH_OK` before moving to the next node. The Ceph Reef to Squid migration happens automatically during the dist-upgrade, but rebalancing can take time.
- Extensions and DKMS modules: if you use third-party kernel modules (NVIDIA GPU drivers, proprietary Intel network cards, custom ZFS), verify their compatibility with the new kernel before migrating. The kernel opt-in is your best tool for this validation.
- Backup before migration: perform a full backup of all your VMs and cluster configuration (`/etc/pve/`) before starting. Use NimbusBackup or PBS for a complete backup before the maintenance window.
- HA (High Availability): if your VMs are managed by the HA Manager, temporarily disable HA resources on the node you are migrating. Re-enable them once the node is back in the cluster running version 9.
- Custom repositories: if you have added third-party APT repositories (Zabbix, Docker, Grafana...), make sure they offer packages for Debian 13 Trixie before the dist-upgrade. Otherwise, temporarily disable them.
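The Ceph and HA precautions above can be scripted around each node's maintenance window. A sketch of the per-node sequence (`vm:100` is an illustrative HA resource ID):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Before taking a Ceph node down: prevent its OSDs from being marked out
# while it reboots, which would trigger unnecessary rebalancing
ceph osd set noout

# Take HA-managed guests on this node out of HA control for the duration
ha-manager set vm:100 --state ignored

# ... perform the dist-upgrade and reboot here ...

# After the node is back: re-enable HA and let Ceph settle
ha-manager set vm:100 --state started
ceph osd unset noout

# Wait for HEALTH_OK before touching the next node
until ceph health | grep -q HEALTH_OK; do
    echo "waiting for Ceph to recover..."
    sleep 30
done
```

Only once the loop exits cleanly does the next node enter its maintenance window; this is what keeps a rolling Ceph upgrade boring.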