VMware | Proxmox v. ESXi v. OpenStack

VMware is now (2023) owned by Broadcom,
which has (2024) discontinued its free ESXi hypervisor.

ESXi

Components/Features:

Tools:

vCenter

Components/Features:

The technology that facilitates connection to vCenter and provides the pop-up window where you interact with the VM is primarily the VMRC or the Web Console.

VMware Remote Console (VMRC)

VMRC connects to vCenter Server and lets users interact directly with a VM's guest operating system through a pop-up window. It handles the keyboard, video, and mouse (KVM) streams, so the session feels like being logged on at the machine itself. VMRC can be launched from the vSphere Web Client or run directly as a standalone application.

Sometimes informally called the "vSphere Remote Client", VMRC is essentially a KVM-style utility that provides console access to guest operating systems running on VMware VMs, complete with screen, keyboard, and mouse input.
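
If VMRC is installed, a console session can also be launched from a shell via the vmrc:// URI scheme. A minimal sketch, in which the vCenter FQDN, username, and the VM's managed object ID (moid) are placeholders:

# Open a VMRC console to a VM identified by its managed object ID (moid).
# All values are placeholders; the moid appears in the vSphere Client URL.
# The "@" in the username is URI-encoded as %40.
☩ vmrc 'vmrc://administrator%40vsphere.local@vcenter.example.com/?moid=vm-1234'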

vSphere Web Console

An alternative to VMRC, especially in newer versions of vSphere, is the web console provided directly through the HTML5-based vSphere Client. This console allows for interaction with the VM without needing to install additional software like VMRC.

Both VMRC and the web console use a combination of technologies, including proprietary VMware protocols, to provide secure and efficient access to manage and operate the VMs remotely.

These tools ensure that you have near-real-time interaction with the VMs, enabling tasks such as configuring operating systems, installing applications, and monitoring operations directly from the console window.

Guest VM Networking

[VM1] ----\
[VM2] ----> [vSwitch0] ---[vmnic0]--- [Physical Switch] --- [Router/Gateway]
[VM3] ----/

Verify Connectivity from a Guest VM

From inside a guest VM (here, running RHEL), we can gather useful clues to infer whether the vSwitch has an uplink and whether the default gateway is reachable.


Check 1: Routing table

☩ ip route show
default via 192.168.11.1 dev eth0 proto dhcp src 192.168.11.100 metric 100
192.168.11.0/24 dev eth0 proto kernel scope link src 192.168.11.100 metric 100


Check 2: Ping the default gateway

☩ ping -c 3 -W 1  192.168.11.1
PING 192.168.11.1 (192.168.11.1) 56(84) bytes of data.

--- 192.168.11.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2056ms

This host sits on a NAT-isolated subnet, so failure to ping the NAT device's address (the subnet gateway) is normal here. We can still verify connectivity to an upstream gateway on another subnet:

☩ ping -c 3 -W 1  192.168.28.1
PING 192.168.28.1 (192.168.28.1) 56(84) bytes of data.
64 bytes from 192.168.28.1: icmp_seq=1 ttl=63 time=4.22 ms
64 bytes from 192.168.28.1: icmp_seq=2 ttl=63 time=6.67 ms
64 bytes from 192.168.28.1: icmp_seq=3 ttl=63 time=4.35 ms

--- 192.168.28.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 4.217/5.080/6.672/1.126 ms

Check 3: Trace the path to an external host

# To gateway router of another subnet
☩ traceroute 192.168.28.1
traceroute to 192.168.28.1 (192.168.28.1), 30 hops max, 60 byte packets
 1  _gateway (192.168.11.1)  0.345 ms  0.319 ms  0.304 ms
 2  192.168.28.1 (192.168.28.1)  4.150 ms  4.545 ms  4.108 ms

# To internet host
☩ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  _gateway (192.168.11.1)  0.515 ms  0.488 ms  0.473 ms
 2  192.168.28.1 (192.168.28.1)  5.500 ms  5.616 ms  5.452 ms
 3  172.27.219.3 (172.27.219.3)  15.673 ms 172.27.219.2 (172.27.219.2)  22.335 ms  19.875 ms
 4  po-53-317-rur201.gambrills.md.bad.comcast.net (68.86.252.33)  19.809 ms po-53-318-rur202.gambrills.md.bad.comcast.net (68.86.252.69)  17.724 ms *
 5  * * *
 6  po-200-xar01.gambrills.md.bad.comcast.net (96.216.84.61)  22.059 ms  26.977 ms  26.948 ms
 7  ae-99-rar01.capitolhghts.md.bad.comcast.net (162.151.61.121)  20.781 ms  25.308 ms  25.283 ms
 8  be-3-arsc1.capitolhghts.md.bad.comcast.net (96.110.235.69)  25.220 ms  16.551 ms  21.570 ms
 9  be-31421-cs02.beaumeade.va.ibone.comcast.net (96.110.40.21)  19.398 ms be-3104-pe04.ashburn.va.ibone.comcast.net (96.110.37.130)  19.331 ms be-3311-pe11.ashburn.va.ibone.comcast.net (96.110.32.130)  17.314 ms
10  * * be-3312-pe12.ashburn.va.ibone.comcast.net (96.110.34.122)  18.597 ms
11  * * *
12  * * dns.google (8.8.8.8)  21.271 ms

Check 4: Interface and ARP info

# Does the public interface (here, eth0) have a DHCP-assigned IP address?
☩ ip -4 addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.11.100/24 brd 192.168.11.255 scope global dynamic noprefixroute eth0
       valid_lft 200397sec preferred_lft 200397sec

# Is the gateway IP in the ARP table?
☩ ip neigh show
192.168.11.2 dev eth0 lladdr 00:15:5d:1c:2f:01 REACHABLE
192.168.11.1 dev eth0 lladdr 00:15:5d:1c:2f:00 REACHABLE

Optional: Check DNS resolution

☩ dig google.com

; <<>> DiG 9.16.23-RH <<>> google.com
;; global options: +cmd
;; Got answer:
...
;; ANSWER SECTION:
google.com.             117     IN      A       142.251.163.138
...
;; Query time: 18 msec
;; SERVER: 192.168.11.2#53(192.168.11.2)
;; WHEN: Fri May 09 07:55:10 EDT 2025
;; MSG SIZE  rcvd: 135

❌ What you cannot see from the VM: the vSwitch configuration itself, the link state of its uplink (vmnic), and the port-group/VLAN assignment of the VM's network adapter.

You'll need access to vSphere or the ESXi host CLI to see those.
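
For example, from the ESXi host's shell (via SSH or the DCUI), these standard esxcli subcommands expose what the guest cannot; vSwitch and NIC names will vary per host:

# List standard vSwitches, their uplinks, and attached port groups
☩ esxcli network vswitch standard list

# Show physical NIC (vmnic) link state and speed
☩ esxcli network nic list

# Show port groups and their VLAN IDs
☩ esxcli network vswitch standard portgroup list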


Virtual Desktop Infrastructure (VDI)

This uses VMware Horizon, a VDI platform that provides virtual desktop and application services, allowing users to access personal desktop environments hosted on VMware ESXi servers through lightweight or thin-client devices.

This VDI setup is particularly popular in enterprise environments where security, management, and cost efficiency are paramount. It allows organizations to provide a standardized set of applications and resources to users, regardless of their physical location or the capabilities of their local hardware.

Here's how it generally works: prior to authentication, the client pop-up in a VMware Horizon VDI environment is typically part of the Horizon Client application installed on the thin client. This client software is responsible for initiating the connection to the VMware Horizon Connection Server. The sequence usually unfolds as follows (a command-line launch sketch follows the list):

Process

  1. User Authentication: The user logs into the Horizon Client installed on their thin client or another device. Authentication is usually handled by the Connection Server, which may integrate with enterprise directory services like Microsoft Active Directory.

  2. Desktop Provisioning: Once authenticated, the Connection Server brokers a connection to a virtual desktop instance hosted on one of the ESXi hosts. This desktop can be a persistent desktop, which maintains user data and settings between sessions, or a non-persistent desktop, which provides a clean state at every login.

  3. Connection: The Horizon Client on the thin client establishes a connection to the virtual desktop using VMware's PCoIP (PC-over-IP) or Blast Extreme protocols. These protocols are designed to deliver high-performance virtual desktop experiences over the network with efficient compression and encryption.

  4. User Session: The user interacts with the Windows environment as if they were using a local PC, but all processing happens on the server-side. The thin client simply displays the desktop and handles input and output.
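
As an illustration of steps 1-3, a hedged sketch of launching a session from a Linux thin client using the Horizon Client's command line; the server URL, user, domain, and desktop-pool name are hypothetical, and available flags vary by client version:

# Connect to a Horizon Connection Server and open a named desktop pool
# (all values below are placeholders)
☩ vmware-view --serverURL=https://horizon.example.com \
              --userName=jdoe \
              --domainName=CORP \
              --desktopName='Win11-Standard'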

Advantages

Workstation

ESXi Subnets and Firewalls

@ ChatGPT4o

Q:

Regarding VMs under ESXi, shouldn't each set of logically related VMs have their own firewalled subnet, defined and managed via the hypervisor, perhaps also allowing programmatic access from the VMs?

A:

Yes, it is a good practice to segment logically related VMs into their own firewalled subnets within an ESXi environment. This approach provides several advantages:

Advantages of Firewalled Subnets for Logically Related VMs:

  1. Security: By isolating groups of VMs, you can limit the blast radius of any potential security incidents. If one VM is compromised, the attacker's ability to move laterally within the network is restricted.

  2. Traffic Control: Firewalled subnets allow you to control the flow of traffic between different groups of VMs. This ensures that only necessary and approved communication paths are allowed, reducing the risk of unauthorized access.

  3. Compliance: Many regulatory frameworks require strict network segmentation to protect sensitive data. Isolating VMs into different subnets can help meet these requirements.

  4. Performance Management: Segregating traffic can help in managing and optimizing network performance by reducing congestion and ensuring that critical applications have the necessary bandwidth.

Implementation in ESXi:

  1. Virtual Switches (vSwitches): Use ESXi's virtual switches to create isolated networks for different groups of VMs. Each vSwitch can have its own set of port groups and VLANs.

    • Port groups define the network configuration, including VLAN ID, network traffic policies, and security settings for the connected VMs.
  2. Distributed Virtual Switches (vDS): For larger environments, consider using VMware vSphere Distributed Switches, which provide centralized management of network configurations across multiple ESXi hosts.

  3. Firewall Rules: Configure firewall rules to control traffic between subnets. This can be done using VMware NSX (VMware's network virtualization and security platform) if available, or through physical firewalls and network devices if NSX is not in use.

  4. Programmatic Access: Allow programmatic access to manage and configure network settings from the VMs. This can be achieved using VMware's APIs and tools like PowerCLI for scripting and automation.

Steps to Set Up Firewalled Subnets:

Terraform, or the Ansible community.vmware collection, can be used to automate these steps; a manual esxcli sketch follows the example below.

  1. Define Subnets: Plan and define the subnets based on the logical grouping of VMs.
  2. Create vSwitches/vDS: Set up virtual switches or distributed switches in ESXi to host the subnets.
  3. Assign VLANs: Assign VLAN IDs to the port groups on the switches to segregate the network traffic.
  4. Configure Firewalls: Implement firewall rules to control the traffic between different subnets. If using NSX, leverage its micro-segmentation capabilities.
  5. Programmatic Configuration: Use scripts and automation tools to manage network configurations and ensure consistency across the environment.

Example:

Suppose, hypothetically, separate port groups for development, staging, and production workloads. Each of these subnets would be isolated on different vSwitches or port groups with appropriate VLANs and firewall rules to control the traffic flow between them, ensuring that development servers cannot directly access production databases, for instance.
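
A minimal sketch of steps 2-4 done by hand with esxcli on a single host; the vSwitch name, port-group names, uplink, and VLAN IDs are hypothetical, and the inter-VLAN firewalling itself would still be enforced by NSX or a physical firewall:

# Create a standard vSwitch and attach a physical uplink
☩ esxcli network vswitch standard add --vswitch-name=vSwitch1
☩ esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Create a port group per logical group of VMs
☩ esxcli network vswitch standard portgroup add --portgroup-name=dev  --vswitch-name=vSwitch1
☩ esxcli network vswitch standard portgroup add --portgroup-name=prod --vswitch-name=vSwitch1

# Tag each port group with its VLAN ID
☩ esxcli network vswitch standard portgroup set --portgroup-name=dev  --vlan-id=10
☩ esxcli network vswitch standard portgroup set --portgroup-name=prod --vlan-id=20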

This segmentation and isolation strategy is a key part of a robust network security posture in a virtualized environment like ESXi.

Create VMs via IaC methods

1. Terraform / vSphere Provider

The official vSphere provider exposes resources such as vsphere_virtual_machine for declarative VM creation and lifecycle management.

2. Ansible : community.vmware.vmware_guest

Module to create and manage VMs on vSphere; a minimal playbook sketch follows this list.

3. vSphere Automation SDK

VMware provides SDKs for Python, Go, and other languages to automate VM creation and management via API calls.
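
As an illustration of option 2, a minimal sketch of a one-task playbook built on community.vmware.vmware_guest; the vCenter address, credentials, datacenter, folder, template, and network names are all placeholders:

# Write a one-task playbook that clones a VM from a template
☩ cat > create_vm.yml <<'EOF'
---
- name: Create a VM on vSphere
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone VM from template
      community.vmware.vmware_guest:
        hostname: vcenter.example.com                      # placeholder vCenter
        username: "{{ lookup('env', 'VMWARE_USER') }}"
        password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
        validate_certs: false                              # lab only; enable in production
        datacenter: DC1                                    # placeholder datacenter
        folder: /DC1/vm
        name: web01
        template: rhel9-template                           # placeholder template
        state: poweredon
        hardware:
          num_cpus: 2
          memory_mb: 4096
        networks:
          - name: dev                                      # port group from the esxcli example above
EOF

# Run it (requires: ansible-galaxy collection install community.vmware)
☩ ansible-playbook create_vm.yml

Cloning from a template keeps the play idempotent: rerunning it leaves an existing, matching VM untouched.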