Proxmox Virtual Environment (PVE) | Docs | Wiki | Admin Guide
Install
- Copy the ISO, e.g., `proxmox-ve_8.4-1.iso`, onto a USB drive using Rufus.
- Boot the target machine from the USB drive and follow the installation prompts.
- Select the (misleadingly named) "GUI" install; it is the normal install, and still produces a headless server managed via the web UI.
Overview
Proxmox is an open-source server-management platform for enterprise virtualization, managing VMs, LXC containers, HA clusters, and integrated disaster recovery.
Built upon Debian, it installs headless by default, providing web UI and CLI interfaces:
MiniPC:
- WebUI : https://192.168.28.181:8006
- SSH: root@192.168.28.181
Compute, network, and storage in a single solution:
- KVM hypervisor : Manage VMs; run almost any OS.
- LXC : A kind of lightweight VM; a container that behaves more like a full Linux OS, with its own `systemd` (init system) and user space.
- SDS : Software-defined Storage
- SDN : Software-defined Networking
- Web UI
Image Formats
- `.vmdk` : VMware (ESXi) proprietary
- `.qcow2` : QEMU copy-on-write; the native format for Proxmox/KVM
- `.vdi` : VirtualBox
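Images in foreign formats can be converted with `qemu-img`, which ships with PVE. A minimal sketch (the filenames are placeholders):

```shell
# Convert a VMware .vmdk to qcow2; -p shows progress
qemu-img convert -p -f vmdk -O qcow2 disk.vmdk disk.qcow2

# Inspect the result (format, virtual size, allocation)
qemu-img info disk.qcow2
```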
Configuration
- Allow updates sans subscription
    - Enable non-production (no-subscription) updates at `/etc/apt/sources.list`
    - Comment out the enterprise list at `/etc/apt/sources.list.d/pve-enterprise.list`
- Storage
    - ZFS
- IOMMU
    - Enables host device passthrough; must be supported by CPU and mainboard, and enabled in BIOS.
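A minimal sketch of the repository change, assuming PVE 8.x on Debian bookworm (verify the suite name against your release):

```shell
# Add the no-subscription repository (assumption: PVE 8.x / bookworm)
echo 'deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription' \
  > /etc/apt/sources.list.d/pve-no-subscription.list

# Comment out the enterprise repository
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

apt update
```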
Proxmox Storage
The setup matters quite a bit for VM performance and flexibility …
What Proxmox expects by default:
The installer typically creates an LVM volume group (pve) with:
- `pve-root` — ext4 for the OS (~30-100GB)
- `pve-swap` — swap
- `pve-data` — LVM-thin pool for VM disks
The key is that `pve-data` should be an LVM-thin pool, not a regular logical volume. LVM-thin gives you thin provisioning, snapshots, and efficient cloning; all critical for VM operations.
If you manually create regular LVM logical volumes, you'll lose those features. Proxmox will still work, but you'll be stuck with raw disk images and no snapshot capability.
Quick check:

```shell
lvs -o lv_name,vg_name,lv_attr,lv_size,pool_lv
```

```text
LV                VG  Attr       LSize    Pool
base-9000-disk-0  pve Vri-a-tz-k   3.00g  data
data              pve twi-aotz-- <348.82g        # Thin LVM
root              pve -wi-ao----   96.00g        # Plain LVM
swap              pve -wi-ao----    8.00g
vm-9000-cloudinit pve Vwi-a-tz--    4.00m  data
```
- The `t` attribute indicates a thin pool; `V` indicates a thin volume. Plain LVM reports `-wi-a-----` attributes.
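For scripting such a check, the leading attribute character can be classified with a small helper (a sketch; `lv_kind` is a hypothetical name, not part of the lvm2 toolset):

```shell
# Classify the first character of an LVM lv_attr string.
# lv_kind is a hypothetical helper, not an lvm2 command.
lv_kind() {
  case "$1" in
    t*) echo "thin-pool"   ;;  # e.g. twi-aotz--
    V*) echo "thin-volume" ;;  # e.g. Vwi-a-tz--
    -*) echo "plain"       ;;  # e.g. -wi-ao----
    *)  echo "other"       ;;
  esac
}

lv_kind "twi-aotz--"   # prints: thin-pool
lv_kind "-wi-ao----"   # prints: plain
```

Feed it the `lv_attr` column from `lvs --noheadings -o lv_attr` to verify that `data` really is a thin pool.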
Recommendations for a single 512GB NVMe:
| Option | Pros |
|---|---|
| LVM-thin (default) | Simple, snapshots, thin provisioning |
| ZFS | Compression, checksums, snapshots, better for your RAM (64GB is plenty for ARC) |
Given your 64GB RAM, ZFS is actually a strong option — it'll use ~8-16GB for ARC cache which dramatically improves I/O. You could reinstall selecting ZFS, or:
What's in that 100GB you left unallocated? Was that intentional for something specific, or were you unsure about the partitioning?
Proxmox VE Ceph Reef cluster
Reef clusters are an evolution in Ceph's long-term release series, bringing improvements in scalability, performance, security, and Kubernetes integration. These advancements make CephFS more capable of handling large-scale, distributed storage requirements across various industries, while still leveraging Ceph's robust object storage foundation, RADOS.
See Ceph.

Debian cloud image: https://cloud.debian.org/images/cloud/bookworm/latest/ ; store the VM's virtual hard drive on a pool, e.g., `local-lvm`.

Create VM by CLI (qm)
Proxmox CLI for managing QEMU/KVM VMs:

| Command | What it does |
|---|---|
| `qm create` | Create a VM |
| `qm set` | Modify VM config |
| `qm start/stop` | Power control |
| `qm importdisk` | Import a disk image |
| `qm template` | Convert VM to template |
| `qm clone` | Clone a VM or template |
| `qm list` | List all VMs |
There's also `pct` for LXC containers, and `pvesm` for storage (which you used earlier).
Create VM Template
```shell
bash debian12-template.sh
```

Debian 12 (bookworm) configured for the cloud-init method (`*.qcow2`).
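The script itself isn't reproduced here; a typical cloud-init template sequence with `qm` looks roughly like this (a sketch; the VMID 9000, image filename, and `local-lvm` pool are assumptions):

```shell
# Sketch: build a Debian 12 cloud-init template (assumed names/IDs)
VMID=9000
IMG=debian-12-genericcloud-amd64.qcow2

qm create $VMID --name debian12-tmpl --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm importdisk $VMID $IMG local-lvm               # import the cloud image
qm set $VMID --scsi0 local-lvm:vm-$VMID-disk-0   # attach the imported disk
qm set $VMID --ide2 local-lvm:cloudinit          # cloud-init drive
qm set $VMID --boot order=scsi0 --serial0 socket --vga serial0
qm template $VMID                                # convert to template
```

Clones of the template then get their IP/SSH-key config injected via cloud-init (`qm set <id> --ipconfig0 ...`).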
Network Configurations
Proxmox VE uses a bridged networking model.
Default
VMs behave as if they were directly connected to the physical network. The network, in turn, sees each virtual machine as having its own MAC, even though there is only one network cable connecting all of these VMs to the network.
Most hosting providers do not support this setup. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface.
Routed
For publicly routable IPs:
Minimal configuration that provides a distinct CIDR for guest VMs, allowing a VM-based network appliance (pfSense/OPNsense) to handle DHCP, DNS, etc.
Masquerading (NAT) with iptables
NAT Bridge
Masquerading allows guests having only a private IP address to access the network by using the host IP address for outgoing traffic. Each outgoing packet is rewritten (NAT) by iptables to appear as originating from the host, and responses are rewritten accordingly to be routed to the original sender.
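A minimal `/etc/network/interfaces` fragment for such a NAT bridge, assuming an internal subnet of 10.0.33.0/24 and `vmbr0` as the uplink (adjust both to your network):

```shell
auto vmbr1
iface vmbr1 inet static
    address 10.0.33.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '10.0.33.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.33.0/24' -o vmbr0 -j MASQUERADE
```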
pfSense OPNsense | Docs
The OSS fork of pfSense; a small (1 CPU / 1 GB) network appliance (FreeBSD) running in a VM; handles DHCP, DNS, etc.
OPNsense is the canonical homelab pattern for Proxmox networking.
Instead of the bare NAT bridge with iptables MASQUERADE on the PVE host (net-snat-bridge.sh),
OPNsense becomes the gateway VM.
Wire it into the same vmbr1 and it owns that subnet's services.
The topology is effectively the same as "NAT Bridge" (above):
```text
Internet
  │
vmbr0 (PVE host, 192.168.x.x)
  │
OPNsense VM
  ├── WAN interface → vmbr0 (gets PVE LAN IP)
  └── LAN interface → vmbr1 (e.g. 10.0.33.1/24)
        │
      vmbr1
        ├── k0s-cp-1
        ├── k0s-worker-1
        └── ...
```
OPNsense then provides on vmbr1:
- DHCP — static leases by MAC for your k0s nodes
- DNS — Unbound with local overrides (so `cp1.k0s.local` resolves internally)
- NAT/routing — replaces your `post-up iptables MASQUERADE` rules on the host
- Firewall — inter-VM rules, egress filtering
- NTP — useful for cluster cert validity
The implication for your current bridge config (net-snat-bridge.sh) is that vmbr1 becomes a dumb L2 segment; no address, no post-up iptables rules on the PVE host. PVE just provides the wire; OPNsense owns the L3:
```shell
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```
The tradeoff is that OPNsense is another VM to maintain, and it consumes resources, but you get a proper network stack with UI, logging, and far more flexibility than bare iptables on the host.
SDN (Software Defined Network)
The PVE SDN stack is experimental.
```shell
apt update
apt install libpve-network-perl
```
K0s on Proxmox
Build Network and VMs for Kubernetes cluster
@ ./pve/k0s-lab
Infra Architecture and Resources Plan
Here's our preliminary design goal for this private network:
- One 3-node K0s cluster (1 control, 2 worker) on Debian 12.
- One RHEL 9 IdM domain controller with a cross-forest trust to an AD (WinSrv 2019) domain controller on another subnet (NAT network, 10.0.11.0/24). AD is the authoritative IdP.

Guest VMs on this PVE host should be on a segregated network (10.0.33.0/24, perhaps) with access to, but protected from, the upstream gateway router (192.168.28.1) that connects this network to the internet.
Storage
The 100GB root is the installer default. Is that too much? And ZFS would take some 8-16 GB of memory; not sure we can afford that.
CPU/RAM
Ryzen 7 has 16 threads. Each is a vCPU under a type-1 hypervisor (pve). So, regarding resources (cpu, ram), perhaps :
- IdM : 2 vCPU / 4 GB
- K0s control: 2 vCPU / 4 GB
- K0s worker: 4 vCPU / 16 GB
Guest VMs total: 12 CPU / 40 GB
That leaves 4 vCPU / 24 GB for the hypervisor (pve) and ZFS
So, if ZFS takes 16 GB, that leaves only 8 GB for pve.
Assessment
Good plan overall. A few corrections and suggestions:
CPU Misconception
vCPUs aren't "reserved" — they're scheduled. Proxmox (KVM) time-slices physical cores across VMs. You can safely overcommit for bursty workloads. Your 12 vCPU allocation across 16 threads is conservative and fine.
RAM Reality
| Component | RAM |
|---|---|
| Proxmox host | ~1-2 GB |
| IdM | 4 GB |
| K0s controller | 4 GB |
| K0s workers (2×16) | 32 GB |
| Total | ~42 GB |
Leaves ~22 GB free. You're not using ZFS (you have LVM-thin), so that concern doesn't apply unless you reinstall.
Assessment by Component
Storage (100GB root): Oversized but harmless. Default installer is cautious. Not worth reinstalling.
IdM (2 vCPU / 4 GB): Tight but workable for lab. FreeIPA runs LDAP + Kerberos + DNS + CA. If sluggish, bump to 6GB.
K0s controller (2 vCPU / 4 GB): Fine for a 3-node cluster.
K0s workers (4 vCPU / 16 GB each): Generous — good if you're running real workloads. Could start at 8GB and scale up.
Network Topology
For the segregated 10.0.33.0/24:
```text
         ┌─────────────────┐
         │  192.168.28.1   │
         │  (gateway/NAT)  │
         └────────┬────────┘
                  │
   ┌──────────────┴──────────────┐
   │      pve host (vmbr0)       │
   │        192.168.28.x         │
   └──────────────┬──────────────┘
                  │
┌─────────────────┴─────────────────────┐
│  vmbr1 (internal only, 10.0.33.0/24)  │
│     no gateway, or pve as router      │
└─────────────────┬─────────────────────┘
                  │
 ┌────────────┬───┴────────────┬────────────┐
 │            │                │            │
k0s-ctrl    k0s-w1           k0s-w2        IdM
10.0.33.11  10.0.33.12       10.0.33.13   10.0.33.10
```
You'll need to:
- Create `vmbr1` (no physical interface, internal only)
- Either run a router VM, or enable IP forwarding on the pve host with iptables NAT
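The host-routed variant can be sketched as follows (assuming 10.0.33.0/24 behind vmbr1 and vmbr0 as the uplink):

```shell
# Enable IPv4 forwarding now, and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ipforward.conf

# NAT the internal subnet out the uplink bridge
iptables -t nat -A POSTROUTING -s 10.0.33.0/24 -o vmbr0 -j MASQUERADE
```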
Revised Resource Allocation
| VM | vCPU | RAM | Disk | Network |
|---|---|---|---|---|
| IdM | 2 | 4-6 GB | 40 GB | vmbr1 (+ vmbr0 for AD trust?) |
| k0s-ctrl | 2 | 4 GB | 32 GB | vmbr1 |
| k0s-w1 | 4 | 12 GB | 32 GB | vmbr1 |
| k0s-w2 | 4 | 12 GB | 32 GB | vmbr1 |
| Total | 12 | 32-34 GB | 136 GB | |
This leaves ~30GB RAM headroom for growth or burst.
Wake on Lan (WoL)
How to Wake a headless Proxmox node:
- Configure for WoL:
    - BIOS/UEFI:
        - Disable: "ERP Ready"
        - Enable: "Resume By PCI-E Device"
    - Install ethtool (installed by default at pve v8.4.1): `apt install ethtool -y`
    - Enable WoL on the public-facing interface (`$ifc`):

      ```shell
      ethtool -s $ifc wol g   # Wake on Magic Packet
      ethtool -s $ifc wol u   # Wake on any traffic
      ```

    - Make it persistent by appending to the interfaces file:

      ```shell
      tee -a /etc/network/interfaces <<-EOH
      	post-up /sbin/ethtool -s $ifc wol g
      EOH
      ```
- Wake Proxmox (pve):
    - Send a Magic Packet: use a WoL app on a remote machine to send it to Proxmox's MAC address.
    - SSH config:

      ```shell
      Host proxmox pve
          HostName 192.168.1.181
          User root
          # Runs WoL cmd locally before SSH session
          ProxyCommand sh -c "wakeonlan <MAC_ADDR> && sleep 30; nc %h %p"
      ```

- Wake a guest VM on pve:

  ```shell
  qm sendkey $vm_id   # Wake via SSH ProxyCommand method
  ```

- Automation: Tools like Home Assistant can be configured to detect network activity and automatically send the wake-on-lan packet to boot the server.
Note: Ensure the NIC supports WoL, as indicated by `Wake-on: g` in the `ethtool <interface>` output.
Proxmox v. ESXi v. OpenStack
VMware is now owned by Broadcom, which has discontinued the free ESXi hypervisor: "End Of General Availability of the free vSphere Hypervisor".