FIO : fio : Flexible I/O Tester (Benchmarking)

Install

@ RHEL

sudo dnf update -y
sudo dnf install -y fio

# Else
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
sudo dnf install -y fio

# Verify
fio --version # fio-3.13

@ Win

choco install -y fio

:: Verify
fio.exe --version

TL;DR

Physical-machine (Phy) I/O performance is roughly a 30% improvement over the Hyper-V VM, and about 10x over WSL2.

Performance at Random R/W (bs=4k)

Type   IOPS [k]   BW [MB/s]
Phy    67         273
VM     51         209
WSL2   4.7        19
NFS    2.4        0.99

Two target/test options : FS or Device (example invocations for each follow the lists below)

FS

  1. Testing filesystem performance (ext4, XFS, etc.).
  2. Simulates real-world file access patterns (e.g., databases, logs).
  3. Avoids direct hardware access (safer for shared systems).

Device

  1. Measuring raw disk performance (bypassing filesystem).
  2. Benchmarking SSDs/NVMe drives for maximum throughput/latency.
  3. Avoiding filesystem caching effects.
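
A minimal sketch of the two invocation styles; the mount point and device name below are assumptions, and the raw-device form overwrites whatever is on that device:

# FS target : scratch file on a mounted filesystem (path is an assumption)
sudo fio --name=fs-randrw --rw=randrw --size=1G --bs=4k --iodepth=32 \
    --direct=1 --runtime=60 --ioengine=libaio --group_reporting \
    --filename=/mnt/fiotest

# Device target : raw block device, bypassing the filesystem (destroys data; device name is an assumption)
sudo fio --name=dev-randrw --rw=randrw --size=1G --bs=4k --iodepth=32 \
    --direct=1 --runtime=60 --ioengine=libaio --group_reporting \
    --filename=/dev/sdX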

@ Container : nixery.dev/shell/fio:latest
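
A minimal sketch of creating the test pod from that image; the pod name, restart policy, and sleep command are assumptions, and the NFS/PVC mount used below is not shown:

kubectl run test-fio-pod --image=nixery.dev/shell/fio:latest --restart=Never \
    --command -- sleep infinity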

☩ k exec -it test-fio-pod -- fio --name=randrw \
    --rw=randrw \
    --size=1G \
    --bs=4k \
    --iodepth=32 \
    --direct=1 \
    --runtime=60 \
    --time_based \
    --ioengine=libaio \
    --group_reporting \
    --filename=192.168.11.100:/srv/nfs/k8s/fiotest \
    |grep -e read: -e write:

  read: IOPS=37.6k, BW=147MiB/s (154MB/s)(8806MiB/60001msec)
  write: IOPS=37.5k, BW=147MiB/s (154MB/s)(8799MiB/60001msec); 0 zone resets

@ NVMe : Random R/W (bs=4k)

AirDisk (benchmarked at about 1/3 the performance of name-brand NVMe drives)

--randread and --randwrite measure peak performance for a single operation.

--randrw measures realistic mixed workload performance, where reads and writes compete.

Expect about 2x the BW and 2x the IOPS from --randread or --randwrite relative to each side of a --randrw run. The mixed run better represents real-world performance; the sum of the pure read and pure write results is the theoretical ceiling.
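
A minimal sketch of the two pure-operation runs, assuming the same parameters and device (/dev/sdc) as the Hyper-V runs further below:

# Pure random read
sudo fio --name=randread --rw=randread --size=1G --bs=4k --iodepth=32 \
    --direct=1 --runtime=60 --ioengine=libaio --group_reporting \
    --filename=/dev/sdc

# Pure random write (overwrites data on the device)
sudo fio --name=randwrite --rw=randwrite --size=1G --bs=4k --iodepth=32 \
    --direct=1 --runtime=60 --ioengine=libaio --group_reporting \
    --filename=/dev/sdc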

@ Windows physical machine where C: is an NVMe SSD with about 1/3 the performance of most name-brand drives:

S:\>fio.exe --rw=randrw --name=test  --size=1G --bs=4k --iodepth=32 --runtime=60 --group_reporting --filename=C:\testfile
...
  read: IOPS=66.8k, BW=261MiB/s (273MB/s)(512MiB/1963msec)
  ...
  write: IOPS=66.8k, BW=261MiB/s (274MB/s)(512MiB/1963msec)
  ...

@ Hyper-V VM (u1@a0) : RHEL9 : Dynamic disk : /dev/sdb

☩ sudo fio --name=randrw \
    --rw=randrw \
    --size=1G \
    --bs=4k \
    --iodepth=32 \
    --direct=1 \
    --runtime=60 \
    --ioengine=libaio \
    --group_reporting \
    --filename=/dev/sdb

  ...
  read:  IOPS=51.0k, BW=199MiB/s (209MB/s)(512MiB/2570msec)
  ...
  write: IOPS=51.0k, BW=199MiB/s (209MB/s)(512MiB/2570msec); 0 zone resets
  ...

@ Hyper-V VM (u1@a0) : RHEL9 : Static disk : /dev/sdc

☩ sudo fio --name=randrw \
    --rw=randrw \
    --size=1G \
    --bs=4k \
    --iodepth=32 \
    --direct=1 \
    --runtime=60 \
    --ioengine=libaio \
    --group_reporting \
    --filename=/dev/sdc

...
  read:  IOPS=45.6k, BW=178MiB/s (187MB/s)(512MiB/2875msec)
  ...
  write: IOPS=45.6k, BW=178MiB/s (187MB/s)(512MiB/2875msec); 0 zone resets
  ...

Get the pure (theoretical) IOPS by summing the read and write IOPS:

@ Hyper-V VM (u1@a0) : RHEL9 : Static disk : /dev/sdc

☩ sudo fio --name=randrw  \
    --rw=randrw \
    --size=1G \
    --bs=4k \
    --iodepth=32 \
    --direct=1 \
    --runtime=60 \
    --ioengine=libaio \
    --group_reporting \
    --filename=/dev/sdc \
    --output-format=json \
     |tee fio.randrw.dev.sdc.json

☩ cat fio.randrw.dev.sdc.json |jq '.jobs[0].read.iops + .jobs[0].write.iops'
69774.820336
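
A similar sketch for combined bandwidth, assuming the JSON bw fields are reported in KiB/s (as in current fio releases):

cat fio.randrw.dev.sdc.json |jq '(.jobs[0].read.bw + .jobs[0].write.bw) / 1024' # Combined BW [MiB/s]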

@ WSL2 : /s

☩ sudo fio --name=randrw \
    --rw=randrw \
    --size=1G \
    --bs=4k \
    --iodepth=32 \
    --direct=1 \
    --runtime=60 \
    --ioengine=libaio \
    --filename=/s/fiotest \
    --group_reporting

...
  read:  IOPS=4698, BW=18.4MiB/s (19.2MB/s)(512MiB/27888msec)
  ...
  write: IOPS=4701, BW=18.4MiB/s (19.3MB/s)(512MiB/27888msec); 0 zone resets
  ...

NFS

@ NFS server

☩ k exec -it test-fio-pod -- fio --name=randrw \
    --rw=randrw \
    --size=1G \
    --bs=4k \
    --iodepth=32 \
    --direct=1 \
    --runtime=60 \
    --time_based \
    --ioengine=libaio \
    --group_reporting \
    --filename=192.168.11.100:/srv/nfs/k8s/default-test-fio-claim-pvc-6ec4b98e-2bac-4aec-a1f2-44dcbef828be \
    |grep -e read: -e write:
  read: IOPS=32.9k, BW=129MiB/s (135MB/s)(7713MiB/60001msec)
  write: IOPS=32.9k, BW=128MiB/s (135MB/s)(7705MiB/60001msec); 0 zone resets

Using NFS performance tuning : async,no_wdelay,fsid=0
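
A minimal sketch of the matching /etc/exports entry on the NFS server; the client subnet is an assumption:

# /etc/exports : reload with `sudo exportfs -ra`
/srv/nfs/k8s  192.168.11.0/24(rw,async,no_wdelay,fsid=0)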

@ Pod application : 10x performance degradation relative to the server side.

☩ k exec -it test-fio-pod -- fio --name=randrw \
    --rw=randrw \
    --size=1G \
    --bs=4k \
    --iodepth=32 \
    --direct=1 \
    --runtime=60 \
    --time_based \
    --ioengine=libaio \
    --group_reporting \
    --filename=/mnt/fiotest \
    |grep -e read: -e write:

  read: IOPS=5617, BW=21.9MiB/s (23.0MB/s)(1317MiB/60003msec)
  write: IOPS=5610, BW=21.9MiB/s (23.0MB/s)(1315MiB/60003msec); 0 zone resets

@ NVMe : Sequential Read

@ Windows

C:\TEMP>fio.exe --name=seqread --filename=C:\testfile --size=1G --rw=read --bs=1M --iodepth=32 --runtime=60 --group_reporting
...
  read: IOPS=3038, BW=3039MiB/s (3186MB/s)(1024MiB/337msec)
...

del C:\testfile 

@ NVMe : Sequential Write

@ Linux

sudo fio --name=seqwrite --filename=/testfile --size=1G --rw=write --bs=1M --iodepth=32 --runtime=60 --group_reporting
sudo rm /testfile
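
@ Windows

A hedged sketch mirroring the sequential-read flags above (not a recorded run):

:: Sequential write to a scratch file, then clean up
fio.exe --name=seqwrite --filename=C:\testfile --size=1G --rw=write --bs=1M --iodepth=32 --runtime=60 --group_reporting
del C:\testfile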