Proxmox VE (PVE) | Download ISO
Install
Copy the ISO, e.g., proxmox-ve_8.4-1.iso, onto a USB stick using Rufus. Boot the target machine from the USB stick and follow the installation prompts. Select the (misleadingly named) "GUI" install; despite the name, this is the normal install, and the resulting system is headless.
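Before writing the ISO with Rufus, it is worth checking it against the SHA-256 sum published on the Proxmox download page. A minimal sketch; the filename matches the example above, and the expected checksum is a placeholder you must replace with the published value:

```python
# Minimal sketch: verify the downloaded ISO against the SHA-256 sum published
# on the Proxmox download page before writing it to USB with Rufus.
# ISO_PATH and EXPECTED_SHA256 are placeholders for your actual download.
import hashlib
from pathlib import Path

ISO_PATH = Path("proxmox-ve_8.4-1.iso")              # assumed download location
EXPECTED_SHA256 = "replace-with-published-checksum"  # from the download page

def sha256sum(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash the file in chunks so a large ISO does not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256sum(ISO_PATH)
print("computed:", actual)
print("match" if actual == EXPECTED_SHA256 else "MISMATCH - re-download the ISO")
```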
Overview
Proxmox Virtual Environment (PVE) is an open-source server-management platform for enterprise virtualization: VMs, containers, HA clusters, and integrated disaster recovery. Built on Debian, it installs headless by default, providing a web UI available locally, e.g., https://192.168.28.181:8006, and SSH access (root@192.168.28.181).
Compute, network, and storage in a single solution.
- KVM hypervisor: Manage VMs; run almost any OS.
- LXC: A kind of lightweight VM; a container that behaves more like a full Linux OS, with its own systemd (init system) and user space.
- SDS: Software-defined storage
- SDN: Software-defined networking
- Web UI (a minimal API sketch follows this list)
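The web UI sits on top of a REST API (/api2/json), which can also be scripted. A minimal sketch, assuming the example host and root@pam credentials from the overview above (the password is a placeholder; an API token is the better choice for real automation). It authenticates, then lists each node's KVM VMs (qemu) and LXC containers (lxc):

```python
# Minimal sketch of talking to the Proxmox VE REST API with plain requests.
# Host and credentials are the example values from the overview above;
# adjust them for your environment.
import requests

HOST = "https://192.168.28.181:8006"
USER = "root@pam"
PASSWORD = "your-root-password"  # placeholder

# 1) Obtain an authentication ticket (cookie) and CSRF token.
resp = requests.post(
    f"{HOST}/api2/json/access/ticket",
    data={"username": USER, "password": PASSWORD},
    verify=False,  # the default install ships a self-signed certificate
)
resp.raise_for_status()
auth = resp.json()["data"]

session = requests.Session()
session.verify = False
session.cookies.set("PVEAuthCookie", auth["ticket"])
session.headers["CSRFPreventionToken"] = auth["CSRFPreventionToken"]

# 2) List nodes, then the KVM VMs (qemu) and LXC containers (lxc) on each.
for node in session.get(f"{HOST}/api2/json/nodes").json()["data"]:
    name = node["node"]
    vms = session.get(f"{HOST}/api2/json/nodes/{name}/qemu").json()["data"]
    cts = session.get(f"{HOST}/api2/json/nodes/{name}/lxc").json()["data"]
    print(name, "VMs:", [v["vmid"] for v in vms], "CTs:", [c["vmid"] for c in cts])
```

The CSRFPreventionToken header is only required for write requests (POST/PUT/DELETE); sending it on GETs as done here is harmless.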
Image Formats
- .vmdk: VMware (ESXi) proprietary format (see the conversion sketch after this list)
- .qcow2: QEMU copy-on-write format, native to QEMU/KVM and the usual choice on Proxmox
- .vdi: Oracle VirtualBox disk image
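The qemu-img tool that ships with QEMU (and therefore with Proxmox) converts between these formats, which is the usual path when moving an ESXi guest's .vmdk disk over to .qcow2. A minimal sketch; the file names are hypothetical:

```python
# Minimal sketch: convert an ESXi .vmdk disk to .qcow2 using qemu-img.
# File names below are placeholders.
import subprocess

SRC = "disk-from-esxi.vmdk"    # hypothetical source image
DST = "disk-for-proxmox.qcow2"

# Inspect the source image (format, virtual size).
subprocess.run(["qemu-img", "info", SRC], check=True)

# Convert: -f = input format, -O = output format, -p = show progress.
subprocess.run(
    ["qemu-img", "convert", "-p", "-f", "vmdk", "-O", "qcow2", SRC, DST],
    check=True,
)
```

On the Proxmox host itself, the converted disk can then be imported for a VM with `qm importdisk`.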
Proxmox vs. ESXi vs. OpenStack
VMware is now owned by Broadcom, which has discontinued the free ESXi hypervisor ("End Of General Availability of the free vSphere Hypervisor").
Proxmox VE Ceph Reef cluster
Reef clusters are an evolution in Ceph's long-term release series, bringing improvements in scalability, performance, security, and Kubernetes integration. These advancements make CephFS more capable of handling large-scale, distributed storage requirements across various industries, while still leveraging Ceph's robust object storage foundation, RADOS.
Ceph Benchmark
Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster
Current fast SSDs provide great performance, and fast network cards are becoming more affordable. Hence, this is a good point to reevaluate how quickly different network setups for Ceph can be saturated depending on how many OSDs are present in each node.
Summary
In this paper we will present the following three key findings regarding hyper-converged Ceph setups with fast disks and high network bandwidth:
- Our benchmarks show that a 10 Gbit/s network can be easily overwhelmed. Even when using only one very fast disk, the network quickly becomes a bottleneck (see the bandwidth arithmetic after this list).
- A network with a bandwidth of 25 Gbit/s can also become a bottleneck. Nevertheless, some improvements can be gained through configuration changes. Routing via FRR is preferred for a full-mesh cluster over Rapid Spanning Tree Protocol (RSTP). If no fallback is needed, a simple routed setup may also be a (less resilient) option.
- When using a 100 Gbit/s network the bottleneck in the cluster seems to finally shift away from the actual hardware and toward the Ceph client. Here we observed write speeds of up to 6000 MiB/s and read speeds of up to 7000 MiB/s for a single client. However, when using multiple clients in parallel, writing at up to 9800 MiB/s and reading at 19 500 MiB/s was possible.
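To see how the link speeds line up with these throughput figures, a quick back-of-the-envelope conversion of raw line rate to MiB/s (ignoring protocol overhead and Ceph replication traffic, both of which reduce usable bandwidth further):

```python
# Back-of-the-envelope check: convert raw link rates to MiB/s and compare
# them with the throughput figures quoted above. This ignores protocol
# overhead and Ceph replication traffic, so real usable bandwidth is lower.
def gbit_to_mib_per_s(gbit: float) -> float:
    return gbit * 1e9 / 8 / 2**20  # bits/s -> bytes/s -> MiB/s

for gbit in (10, 25, 100):
    print(f"{gbit:>3} Gbit/s ~= {gbit_to_mib_per_s(gbit):7.0f} MiB/s")

# Approximate output:
#  10 Gbit/s ~=    1192 MiB/s  -> easily saturated by a single fast NVMe OSD
#  25 Gbit/s ~=    2980 MiB/s
# 100 Gbit/s ~=   11921 MiB/s  -> above the 6000-9800 MiB/s writes observed,
#                                 so the Ceph client, not the link, is the limit
```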