Proxmox Virtual Environment (PVE) | Docs | Wiki | Admin Guide

Install

Download ISO

  1. Write the ISO, e.g., proxmox-ve_8.4-1.iso, onto a USB drive using Rufus.
  2. Boot the target machine from the USB drive and follow the installation prompts.
  3. Select the "Graphical" install; despite the name, it refers only to the installer itself. The resulting system is the normal, headless install.

Overview

Proxmox is an open-source server-management platform for enterprise virtualization. It manages KVM virtual machines, LXC containers, HA clusters, and integrated disaster recovery.

Built upon Debian, it installs headless by default, providing a web UI (port 8006) and CLI interfaces:

MiniPC:

Compute, network, and storage in a single solution.

Image Formats


Configuration


Proxmox Storage

The setup matters quite a bit for VM performance and flexibility …

What Proxmox expects by default:

The installer typically creates an LVM volume group (pve) with: root (the OS), swap, and data (an LVM-thin pool for guest disks).

The key is that pve-data should be an LVM-thin pool, not a regular logical volume. LVM-thin gives you thin provisioning, snapshots, and efficient cloning; all critical for VM operations.

If you manually create regular LVM logical volumes, you'll lose those features. Proxmox will still work, but you'll be stuck with raw disk images and no snapshot capability.
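With an LVM-thin pool backing the VM disks, snapshots are available directly through qm. A quick sketch (VMID 100 is a placeholder):

```shell
# Snapshot operations require snapshot-capable storage such as LVM-thin;
# VMID 100 is a placeholder for an existing VM.
qm snapshot 100 pre-upgrade          # take a snapshot named "pre-upgrade"
qm listsnapshot 100                  # list the VM's snapshots
qm rollback 100 pre-upgrade          # roll the VM back to it
qm delsnapshot 100 pre-upgrade       # remove it when no longer needed
```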

Quick check:

# lvs -o lv_name,vg_name,lv_attr,lv_size,pool_lv
  LV                VG  Attr       LSize    Pool
  base-9000-disk-0  pve Vri-a-tz-k    3.00g data
  data              pve twi-aotz-- <348.82g         # Thin LVM
  root              pve -wi-ao----   96.00g         # Plain LVM
  swap              pve -wi-ao----    8.00g
  vm-9000-cloudinit pve Vwi-a-tz--    4.00m data

Recommendations for a single 512GB NVMe:

Option              Pros
LVM-thin (default)  Simple, snapshots, thin provisioning
ZFS                 Compression, checksums, snapshots; 64GB RAM is plenty for ARC

Given your 64GB RAM, ZFS is actually a strong option — it'll use ~8-16GB for ARC cache, which dramatically improves I/O. You could reinstall selecting ZFS.
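If the ARC's memory appetite is the concern, it can be capped via a module parameter. A sketch of the usual modprobe setting (the 8 GiB value is an arbitrary example):

```shell
# /etc/modprobe.d/zfs.conf — cap the ZFS ARC at 8 GiB (8 * 2^30 bytes).
# Run `update-initramfs -u` and reboot for the change to take effect.
options zfs zfs_arc_max=8589934592
```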

What's in that 100GB you left unallocated? Was that intentional for something specific, or were you unsure about the partitioning?

Proxmox VE Ceph Reef cluster

Reef clusters are an evolution in Ceph's long-term release series, bringing improvements in scalability, performance, security, and Kubernetes integration. These advancements make CephFS more capable of handling large-scale, distributed storage requirements across various industries, while still leveraging Ceph's robust object storage foundation, RADOS.

See Ceph docs. To create a VM from a Debian cloud image (https://cloud.debian.org/images/cloud/bookworm/latest/):
  • https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
  • Disks tab — select local-lvm for the VM's virtual hard drive
  • Create VM by CLI qm

    Proxmox CLI for managing QEMU/KVM VMs

    Command        What it does
    qm create      Create a VM
    qm set         Modify VM config
    qm start/stop  Power control
    qm importdisk  Import a disk image
    qm template    Convert VM to template
    qm clone       Clone a VM or template
    qm list        List all VMs

    There's also pct for LXC containers, and pvesm for storage (which you used earlier).

    Create VM Template

    bash debian12-template.sh
    

    Debian 12 (bookworm) configured for cloud-init method (*.qcow2)

    @ debian12-template.sh
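    A minimal sketch of what debian12-template.sh typically does. VMID 9000 matches the lvs output above (base-9000-disk-0, vm-9000-cloudinit); the storage name local-lvm and the VM sizing are assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

VMID=9000                                 # matches the template VMID seen in lvs
IMG=debian-12-generic-amd64.qcow2
STORAGE=local-lvm                         # assumption: the default thin pool

wget -qN "https://cloud.debian.org/images/cloud/bookworm/latest/${IMG}"

qm create "$VMID" --name debian12-tmpl --memory 2048 --cores 2 \
    --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm importdisk "$VMID" "$IMG" "$STORAGE"              # import the qcow2 into the pool
qm set "$VMID" --scsi0 "${STORAGE}:vm-${VMID}-disk-0"
qm set "$VMID" --ide2 "${STORAGE}:cloudinit"         # cloud-init config drive
qm set "$VMID" --boot order=scsi0 --serial0 socket --vga serial0
qm template "$VMID"                                  # convert the VM to a template
```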


    Network Configurations

    Proxmox VE uses a bridged networking model.

    Default

    VMs behave as if they were directly connected to the physical network. The network, in turn, sees each virtual machine as having its own MAC, even though there is only one network cable connecting all of these VMs to the network.

    default-network-setup-bridge

    Most hosting providers do not support this setup. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface.

    Routed

    For publicly routable IPs:

    Minimal configuration that provides a distinct CIDR for guest VMs, allowing a VM-based network appliance (pfSense/OPNsense) to handle DHCP, DNS, etc.

    default-network-setup-routed

    Masquerading (NAT) with iptables

    NAT Bridge

    Masquerading allows guests having only a private IP address to access the network by using the host IP address for outgoing traffic. Each outgoing packet is rewritten (NAT) by iptables to appear as originating from the host, and responses are rewritten accordingly to be routed to the original sender.
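    A typical /etc/network/interfaces stanza for such a NAT bridge. The bridge name vmbr1 and subnet 10.0.33.0/24 follow the plan used elsewhere in these notes; vmbr0 is assumed to be the uplink:

```shell
auto vmbr1
iface vmbr1 inet static
    address 10.0.33.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 10.0.33.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.0.33.0/24 -o vmbr0 -j MASQUERADE
```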

    OPNsense | Docs

    The OSS fork of pfSense; a small (1 CPU/1 GB) FreeBSD network appliance running in a VM; handles DHCP, DNS, etc.

    Running OPNsense as the gateway VM is the canonical homelab pattern for Proxmox networking. Instead of the bare NAT bridge with iptables MASQUERADE on the PVE host (net-snat-bridge.sh), OPNsense becomes the gateway. Wire it into the same vmbr1 and it owns that subnet's services.

    The topology is effectively the same as "NAT Bridge" (above):

    Internet
        │
      vmbr0 (PVE host, 192.168.x.x)
        │
     OPNsense VM
      ├── WAN interface → vmbr0 (gets PVE LAN IP)
      └── LAN interface → vmbr1 (e.g. 10.0.33.1/24)
        │
      vmbr1
      ├── k0s-cp-1
      ├── k0s-worker-1
      └── ...
    

    OPNsense then provides on vmbr1: DHCP, DNS, firewall rules, and NAT to the upstream network.

    The implication for your current bridge config (net-snat-bridge.sh) is that vmbr1 becomes a dumb L2 segment; no address, no post-up iptables rules on the PVE host. PVE just provides the wire; OPNsense owns the L3:

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
    

    The tradeoff is OPNsense is another VM to maintain and consume resources, but you get a proper network stack with UI, logging, and far more flexibility than bare iptables on the host.

    SDN (Software Defined Network)

    The PVE SDN stack was experimental prior to PVE 8.1; on older releases, install it manually:

    apt update
    apt install libpve-network-perl
    

    K0s on Proxmox

    Build Network and VMs for Kubernetes cluster

    @ ./pve/k0s-lab

    See README (MD|HTML)

    Infra Architecture and Resources Plan

    Here's our preliminary design goal for this private network:

    Guest VMs on this pve should be on a segregated network (perhaps 10.0.33.0/24) with access to, but protected from, the upstream gateway router (192.168.28.1) that connects this network to the internet.

    Storage

    The 100GB root volume is the default install. Is that too much? And ZFS would take some 8-16 GB of memory; not sure we can afford that.

    CPU/RAM

    Ryzen 7 has 16 threads. Each is presented as a vCPU under a type-1 hypervisor (pve). So, regarding resources (CPU, RAM), perhaps:

    Guest VMs total: 12 CPU / 40 GB

    That leaves 4 vCPU / 24 GB for the hypervisor (pve) and ZFS

    So, if ZFS takes 16 GB, that leaves only 8 GB for pve.

    Assessment

    Good plan overall. A few corrections and suggestions:

    CPU Misconception

    vCPUs aren't "reserved" — they're scheduled. Proxmox (KVM) time-slices physical cores across VMs. You can safely overcommit for bursty workloads. Your 12 vCPU allocation across 16 threads is conservative and fine.

    RAM Reality

    Component            RAM
    Proxmox host         ~1-2 GB
    IdM                  4 GB
    K0s controller       4 GB
    K0s workers (2×16)   32 GB
    Total                ~42 GB

    Leaves ~22 GB free. You're not using ZFS (you have LVM-thin), so that concern doesn't apply unless you reinstall.

    Assessment by Component

    Storage (100GB root): Oversized but harmless. Default installer is cautious. Not worth reinstalling.

    IdM (2 vCPU / 4 GB): Tight but workable for lab. FreeIPA runs LDAP + Kerberos + DNS + CA. If sluggish, bump to 6GB.

    K0s controller (2 vCPU / 4 GB): Fine for a 3-node cluster.

    K0s workers (4 vCPU / 16 GB each): Generous — good if you're running real workloads. Could start at 8GB and scale up.

    Network Topology

    For the segregated 10.0.33.0/24:

                        ┌─────────────────┐
                        │  192.168.28.1   │
                        │  (gateway/NAT)  │
                        └────────┬────────┘
                                 │
                  ┌──────────────┴──────────────┐
                  │ pve host (vmbr0)            │
                  │ 192.168.28.x                │
                  └──────────────┬──────────────┘
                                 │
             ┌───────────────────┴───────────────────┐
             │ vmbr1 (internal only, 10.0.33.0/24)   │
             │ no gateway, or pve as router          │
             └───────────────────┬───────────────────┘
                                 │
            ┌────────────┬───────┴────────┬────────────┐
            │            │                │            │
        k0s-ctrl     k0s-w1          k0s-w2         IdM
       10.0.33.11   10.0.33.12      10.0.33.13    10.0.33.10
    

    You'll need:

    1. Create vmbr1 (no physical interface, internal only)
    2. Either run a router VM, or enable IP forwarding on pve host with iptables NAT
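    For step 2, IP forwarding can be enabled immediately and persisted across reboots (standard sysctl keys; the drop-in filename is an arbitrary choice):

```shell
# Enable now:
sysctl -w net.ipv4.ip_forward=1
# Persist across reboots:
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ipforward.conf
# Verify:
sysctl net.ipv4.ip_forward
```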

    Revised Resource Allocation

    VM        vCPU  RAM       Disk    Network
    IdM       2     4-6 GB    40 GB   vmbr1 (+ vmbr0 for AD trust?)
    k0s-ctrl  2     4 GB      32 GB   vmbr1
    k0s-w1    4     12 GB     32 GB   vmbr1
    k0s-w2    4     12 GB     32 GB   vmbr1
    Total     12    32-34 GB  136 GB

    This leaves ~30GB RAM headroom for growth or burst.


    Wake-on-LAN (WoL)

    To wake a headless Proxmox node, send a WoL "magic packet" to its NIC's MAC address from another machine on the LAN.

    Note: Ensure the NIC supports WoL, as indicated by Wake-on: g in the ethtool <interface> output; enable it with ethtool -s <interface> wol g if needed.
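    A magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times. Tools like etherwake or wakeonlan send it for you, but it can also be built in plain bash (the MAC and broadcast address below are placeholders):

```shell
# Build the WoL magic-packet payload as a hex string:
# 6 bytes of 0xFF, then the NIC's MAC address repeated 16 times.
wol_payload() {
    local hex="${1//:/}"                       # strip colons from the MAC
    local p="ffffffffffff" i                   # 6 x 0xFF
    for i in $(seq 16); do p+="$hex"; done     # 16 x MAC
    printf '%s' "$p"
}

# Send it as raw bytes to UDP port 9 via bash's /dev/udp
# (placeholder MAC and LAN broadcast address):
# printf '%b' "$(wol_payload 'aa:bb:cc:dd:ee:ff' | sed 's/../\\x&/g')" \
#     > /dev/udp/192.168.28.255/9
```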


    Proxmox v. ESXi v. OpenStack

VMware is now owned by Broadcom, which has discontinued the free ESXi Hypervisor: "End Of General Availability of the free vSphere Hypervisor".