Infrastructure · February 13, 2026 · 15 min read

    Docker on Proxmox: VM, native LXC or Docker-in-LXC? The guide to choosing

    Three approaches coexist for running containers on Proxmox VE. Two are legitimate, one is a trap. This guide compares performance, security, isolation and use cases to help you choose – without dogma, but with a clear opinion.

    Docker on Proxmox: three approaches, one right choice

    When managing a Proxmox cluster, the question always comes up eventually: where should Docker run? On a classic VM? Directly in an LXC container? Or – and this is where things get complicated – by nesting Docker inside an LXC?

    The temptation is understandable: LXC uses less RAM than a VM, boots in seconds, and shares the host kernel. Why "waste" resources on a full VM when LXC can do everything?

    The short answer: because isolation, security and maintainability matter more than saving a few hundred MB of RAM. And because Docker-in-LXC, despite the tutorials floating around, is an anti-pattern that always ends up causing problems in production.

    This article reviews the three approaches – KVM VM + Docker, native LXC (without Docker), and Docker-in-LXC with nesting – with concrete comparisons on performance, security, and real-world use cases.

    Architecture comparison: the 3 models

    Before diving into the details, let's visualize how each approach stacks its layers:

    KVM VM + Docker

    Docker Containers
    Docker Engine
    Guest OS (Linux)
    KVM / QEMU
    Proxmox Host

    Recommended for production

    Native LXC (no Docker)

    Application
    System Dependencies
    LXC Rootfs
    Namespaces / cgroups
    Proxmox Host Kernel

    Clean for simple services

    Docker-in-LXC (nesting)

    Docker Containers
    Docker Engine
    LXC Rootfs
    Nested Namespaces
    Proxmox Host Kernel

    Anti-pattern – avoid

    The fundamental difference: in a KVM VM, Docker runs on its own isolated kernel, exactly like on a dedicated server. In an LXC, everything shares the host kernel – including Docker if you install it there, with all the risks that implies.

    Performance

    The main argument in favor of LXC is performance. Let's see what the numbers actually show:

    | Criterion | KVM VM + Docker | Native LXC | Docker-in-LXC |
    |---|---|---|---|
    | CPU overhead | 1–3 % (VT-x) | < 1 % | < 1 % |
    | Disk I/O | Native (virtio-scsi) | Native | overlay2 overhead |
    | Network | ~Native (virtio-net) | Native (veth) | Possible double NAT |
    | Boot time | 15–30 s | 1–3 s | 5–15 s (LXC + Docker daemon) |
    | RAM footprint | +256–512 MB (guest kernel) | Minimal | +100–200 MB (Docker daemon) |

    In practice: On modern hardware with VT-x/VT-d, the overhead of a KVM VM is negligible for most workloads. The difference is measured in single-digit percentages. The "LXC is faster" argument is true on paper, but rarely decisive in production. The additional RAM for the guest kernel (256–512 MB) is the real cost – an acceptable price for full isolation.

    Security and isolation

    This is where the differences become critical. Performance is a trade-off, security is a requirement.

    | Criterion | KVM VM | Native LXC | Docker-in-LXC |
    |---|---|---|---|
    | Kernel isolation | Full (dedicated kernel) | Partial (host kernel) | Weak (host kernel + nesting) |
    | Attack surface | Minimal (QEMU + virtio) | Medium (host syscalls) | Large (syscalls + nesting) |
    | AppArmor | Not required | Active by default | Often disabled |
    | Live migration | Yes (hot) | No (shutdown required) | No (shutdown required) |
    | Container escape to host | Extremely difficult | Possible (kernel CVE) | Increased risk |
    | Multi-tenant | Yes – full isolation | Risky | Dangerous |

    KVM VM: hardware isolation

    A KVM VM runs its own kernel in a memory space isolated by the processor (Intel VT-x / AMD-V). An exploit inside a Docker container reaches at most the guest kernel – not the host. Additionally, live migration allows moving a VM between cluster nodes with zero downtime, which is impossible with LXC.
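For reference, a live migration is a single command on the Proxmox side. This is a sketch: the VM ID (101) and target node (pve2) are placeholders, and it assumes shared or replicated storage between the nodes:

```shell
# Move a running VM to another cluster node with no downtime
# (placeholders: 101 = VM ID, pve2 = target node name)
qm migrate 101 pve2 --online
```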

    Docker-in-LXC: why it's an anti-pattern in production

    Docker inside LXC combines the worst of both worlds: the Docker container shares the host kernel via LXC, nesting weakens namespace protections, and the necessary workarounds (disabling AppArmor, extended permissions) significantly expand the attack surface. A Docker exploit can reach all the way up to the Proxmox host – without the guest kernel barrier that a VM provides.

    Docker-in-LXC: why it's problematic

    To run Docker inside a Proxmox LXC container, you need to enable the nesting feature, which allows Linux namespaces to be created inside the container's own namespaces. Here's what that entails in practice:

    Required LXC configuration

    # /etc/pve/lxc/<CTID>.conf
    
    # Enable nesting (required for Docker)
    features: nesting=1,keyctl=1
    
    # Sometimes needed for overlay2
    features: nesting=1,keyctl=1,fuse=1
    
    # If it still doesn't work... (danger)
    lxc.apparmor.profile: unconfined
    lxc.cgroup2.devices.allow: a
    lxc.cap.drop:
    lxc.mount.auto: proc:rw sys:rw

    Each line added further weakens the container's isolation. The last four lines essentially disable nearly all LXC security.

    Nesting = weakened isolation

    The nesting=1 flag allows the creation of nested namespaces. Docker then creates sub-namespaces within a space already shared with the host. The boundary between "container" and "host" becomes blurred.

    AppArmor often disabled

    Docker's and LXC's AppArmor profiles regularly conflict. The most common "solution" found on forums: lxc.apparmor.profile: unconfined. This disables Mandatory Access Control – the last line of defense between the container and the host.

    Issues with cgroups v2 and overlay2

    Proxmox has used cgroups v2 by default since version 7. Some Docker-in-LXC configurations require additional workarounds for the overlay2 storage driver, or even a fallback to vfs – which is significantly less performant.
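If you end up debugging such a setup, one quick check is which storage driver Docker actually selected (command sketch; the output depends on the host):

```shell
# Prints the active storage driver; "vfs" here means the overlay2
# workarounds failed and every image layer is being fully copied
docker info --format '{{ .Driver }}'
```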

    keyctl and FUSE: expanded permissions

    Enabling keyctl=1 and fuse=1 gives the LXC container access to syscalls that are normally restricted. Each additional permission is a potential vector for privilege escalation to the host.

    The end result: to run Docker in an LXC, you progressively disable the protections that justify LXC's existence. You end up with a container that is neither as secure as a standard LXC, nor as isolated as a VM – the worst of both worlds.

    Native LXC: when it's the right choice

    To be clear: the problem isn't LXC itself – it's Docker inside LXC. Used natively, a Proxmox LXC container is a legitimate tool for certain use cases:

    LXC shines for simple services

    • Reverse proxy (nginx, HAProxy) – lightweight, no Docker needed
    • Recursive DNS (Unbound, Pi-hole) – single service, system configuration
    • Small static web server – Apache/nginx serving files
    • Lightweight monitoring – collection agents, Prometheus exporters
    • System tools – VPN, SSH bastion, jumpbox
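As an illustration, creating such a single-service container is a one-liner on the Proxmox side. The CTID (110), template name, bridge and storage IDs below are placeholders for your environment:

```shell
# Unprivileged LXC for a lightweight DNS/proxy-type service
# (placeholders: 110, the Debian template, vmbr0, local-lvm)
pct create 110 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname dns01 --unprivileged 1 \
  --cores 1 --memory 512 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:4
pct start 110
```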

    Native LXC limitations

    As soon as you need docker-compose, a CI/CD pipeline, OCI images or a container orchestration ecosystem, native LXC reaches its limits. There's no docker pull, no Dockerfile, no Docker volumes. For these use cases, a VM with Docker is the right answer. And an important reminder: live migration is not available with LXC – moving between nodes requires stopping and restarting the container.

    Use cases: when to choose what

    | Scenario | Recommendation |
    |---|---|
    | Production with Docker | KVM VM + Docker – isolation, live migration, standard |
    | Microservices / docker-compose | KVM VM + Docker – full ecosystem |
    | CI/CD (GitLab Runner, Jenkins) | KVM VM + Docker – isolated builds, native DinD |
    | Multi-tenant / isolated clients | KVM VM – full isolation mandatory |
    | Simple service (DNS, proxy, static web) | Native LXC – lightweight and clean, no Docker |
    | Quick dev/test with Docker | Docker-in-LXC acceptable – if disposable and not exposed |
    | Legacy application without Docker | Native LXC or VM, depending on required isolation |

    The honest verdict

    VM + Docker

    The production choice

    • Full isolation (dedicated kernel)
    • Standard Docker, no hacks
    • Live migration between nodes
    • Snapshots and PBS backup

    Native LXC

    Clean for simple tasks

    • Instant boot
    • Minimal resource usage
    • No Docker / no OCI
    • No live migration

    Docker-in-LXC

    The trap to avoid

    • Weakened security (nesting)
    • Fragile workarounds
    • False RAM savings
    • No live migration

    Our approach at RDEM Systems

    At RDEM Systems, we made a clear choice: 100% KVM VMs, no LXC in our production infrastructure. Here's why.

    Why 100% VMs at RDEM Systems

    Full client isolation

    Each client has dedicated VMs with their own kernel. No kernel sharing between tenants – a compromise in one client's environment cannot affect the others.

    Docker works without hacks

    docker-compose, volumes, overlay2, multi-stage builds – everything works natively in a VM, without nesting, without workarounds, without surprises after a Proxmox update.
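A minimal docker-compose.yml, exactly as you would write it on any Linux host (the service name, image and limits are illustrative):

```yaml
services:
  web:
    image: nginx:1.27
    ports:
      - "8080:80"
    volumes:
      - webdata:/usr/share/nginx/html
    # Resource limits honored by `docker compose up` (Compose v2)
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 256M

volumes:
  webdata:
```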

    Live migration between hypervisors

    We move VMs between cluster nodes with zero downtime – for maintenance, load balancing or hypervisor updates. This is impossible with LXC, which requires stopping and restarting.

    Clean snapshots and PBS backups

    VMs benefit from consistent snapshots (with qemu-guest-agent for fsfreeze) and PBS backups with deduplication – our 3-2-1 backup strategy relies entirely on VMs.
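A backup of this kind boils down to one command (the VM ID 101 and the PBS storage name "pbs" are placeholders):

```shell
# Snapshot-mode backup of VM 101 to a Proxmox Backup Server storage;
# with qemu-guest-agent installed, filesystems are frozen first
vzdump 101 --storage pbs --mode snapshot
```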

    The cost: yes, each VM consumes 256–512 MB more RAM than an equivalent LXC for the guest kernel. On a cluster with hundreds of GB of RAM, this is a marginal price for full isolation, live migration and worry-free maintenance.
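A quick back-of-the-envelope sketch of that cost (the VM count, per-VM overhead and cluster size below are illustrative, not measurements):

```python
# Illustrative numbers only: 50 VMs on a 512 GiB cluster,
# using the midpoint of the 256-512 MB guest-kernel range
vm_count = 50
overhead_per_vm_mb = 384
cluster_ram_gb = 512

total_overhead_gb = vm_count * overhead_per_vm_mb / 1024
share_of_cluster = total_overhead_gb / cluster_ram_gb

print(f"Guest-kernel overhead: {total_overhead_gb:.1f} GiB "
      f"({share_of_cluster:.1%} of cluster RAM)")
```

Under these assumptions, the dedicated kernels cost under 4 % of cluster RAM.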

    Discover our managed Proxmox VPS and VDS: all our production VMs are deployed with Docker on KVM, with PBS backup and live migration included.

    Best practices

    Docker on Proxmox VM checklist

    • Install qemu-guest-agent in each VM: allows Proxmox to communicate with the VM (fsfreeze for snapshots, clean shutdown)
    • Separate Docker storage: dedicate a virtio-scsi disk to the /var/lib/docker directory – avoids saturating the rootfs and simplifies sizing
    • cgroups v2 enabled: Proxmox and Docker natively support cgroups v2 – don't switch back to v1 to work around an LXC issue
    • Limit resources: configure CPU and RAM limits both in Proxmox and in Docker constraints (deploy.resources)
    • Regular PBS backups: schedule daily backups with verification (verify jobs) – Docker containers are ephemeral, but volumes are not
    • Bridge or VLAN networking: use Proxmox bridges or VLANs rather than Docker's default NAT for clean, performant networking
    • Docker + VM monitoring: monitor metrics at both levels – a swapping VM or an OOM-killed Docker daemon won't be visible from Proxmox metrics alone
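The "separate Docker storage" item above can be sketched like this. The device name /dev/sdb is a placeholder (check with lsblk), and in a real fstab you would prefer the filesystem UUID:

```shell
# One-time setup: dedicate a second virtio-scsi disk to Docker's data
mkfs.ext4 /dev/sdb
echo '/dev/sdb /var/lib/docker ext4 defaults 0 2' >> /etc/fstab
systemctl stop docker
# If Docker already has data under /var/lib/docker, copy it over first
mount /var/lib/docker
systemctl start docker
```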


    Need a well-architected Docker on Proxmox setup?

    We deploy and manage Proxmox clusters with Docker on VMs – full isolation, live migration, PBS backups. No LXC, no hacks, no surprises.