A comprehensive performance testing toolkit using FIO (Flexible I/O Tester) for benchmarking block devices and filesystems. This suite includes automated scripts for testing sequential read/write performance, finding optimal workload configurations, and conducting full-duplex performance analysis.
This repository contains three main testing tools:
- data-test.sh: Block device performance testing
- fs-test.sh: Filesystem performance testing with automatic scale-up phases
- fio_gen_meta.sh: Metadata-intensive random I/O testing
Before running the tests, ensure the following tools are installed:
- fio (Flexible I/O Tester)
  - RHEL/CentOS/Fedora: `dnf install fio`
- bc (Basic Calculator)
  - RHEL/CentOS/Fedora: `dnf install bc`
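To verify the prerequisites are in place before running anything, a quick check such as the following can help (a minimal sketch; the scripts may perform their own checks):

```bash
# Confirm fio and bc are installed and on PATH
for tool in fio bc; do
  command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; exit 1; }
done
fio --version
```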
Tests block devices with support for single-host and multi-host distributed testing scenarios.
```bash
./data-test.sh <mode> <block_device> <duration> [mode_specific_args]
```

Run all three test phases (write scale-up, read scale-up, and full duplex):

```bash
./data-test.sh full <device> <duration> [host_number] [total_hosts]
```

Run a manual full duplex test with fixed write and read job counts:

```bash
./data-test.sh fdx <device> <duration> <write_jobs> <read_jobs> [host_number] [total_hosts]
```

Find the maximum number of writers that maintains read performance with a fixed number of readers:

```bash
./data-test.sh scale-writers <device> <duration> <read_jobs> [host_number] [total_hosts]
```

Single-host testing:
```bash
# Run all test phases on /dev/sdb for 30 seconds
./data-test.sh full /dev/sdb 30

# Run full duplex with 8 write jobs and 4 read jobs
./data-test.sh fdx /dev/sdb 30 8 4

# Find optimal write scaling with 8 fixed read jobs
./data-test.sh scale-writers /dev/sdb 30 8
```

Multi-host testing:
```bash
# Run full duplex test with 4 hosts total
# On host 0:
./data-test.sh fdx /dev/sdb 30 10 6 0 4
# On host 1:
./data-test.sh fdx /dev/sdb 30 10 6 1 4
# On host 2:
./data-test.sh fdx /dev/sdb 30 10 6 2 4
# On host 3:
./data-test.sh fdx /dev/sdb 30 10 6 3 4

# Arguments explained: fdx /dev/sdb 30 10 6 <host_number> <total_hosts>
# - 30: test duration in seconds
# - 10: number of write jobs (aim to saturate write performance)
# - 6: number of read jobs (use a representative number of tape drives per host; readers are limited to 400MB/s each)
# - 0-3: unique host number (0-indexed)
# - 4: total number of hosts participating
```

In multi-host mode, each host receives a unique offset (host_number × 500G) to avoid data overlap when multiple systems test the same shared storage device simultaneously.
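As an illustration of that offset arithmetic (hypothetical variable names, not taken from data-test.sh):

```bash
# Each host starts its I/O region host_number * 500G into the device,
# so multiple hosts testing the same device never touch the same blocks.
HOST_NUMBER=2
OFFSET_GB=$((HOST_NUMBER * 500))
echo "host ${HOST_NUMBER} starts at offset ${OFFSET_GB}G"   # prints 1000G
# fio can be pointed at that region with its offset option, e.g. --offset=1000g
```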
Default test parameters:
- Block size: 1M
- I/O engine: libaio
- I/O depth: 32
- Job range: 4-32 jobs
- Direct I/O: Enabled
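For reference, a standalone fio invocation mirroring these defaults would look roughly like the following (an illustrative sketch, not the exact command the script builds; the device path and job count are placeholders):

```bash
fio --name=seqwrite \
    --filename=/dev/sdb \
    --rw=write \
    --bs=1M \
    --ioengine=libaio \
    --iodepth=32 \
    --numjobs=8 \
    --direct=1 \
    --runtime=30 \
    --time_based \
    --group_reporting
```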
Runs a comprehensive 3-phase filesystem performance test to identify peak throughput and optimal workload configurations.
```bash
./fs-test.sh <test_directory> <duration> <reader_jobs>
```

- Phase 1: Write Scale-Up - Find peak write throughput by incrementally increasing concurrent write jobs
- Phase 2: Read Scale-Up - Find peak read throughput by incrementally increasing concurrent read jobs
- Phase 3: Full Duplex - With readers fixed at a 400MB/s rate limit, scale writers until write throughput peaks or reader throughput drops below 365MB/s
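A simplified sketch of the Phase 3 loop, reconstructed from the description above (the real script's job names, log parsing, and step sizes may differ); fio's rate option provides the per-job 400MB/s cap:

```bash
# Fix readers at a 400MB/s per-job cap, then add writers until reads degrade.
READERS=8
for WRITERS in 4 8 12 16 20 24 28 32; do
  fio --directory=/mnt/testfs --bs=1M --ioengine=libaio --iodepth=32 \
      --size=10G --runtime=30 --time_based --group_reporting \
      --output="fdx_${WRITERS}w.log" \
      --name=readers --rw=read --rate=400m --numjobs="$READERS" \
      --name=writers --rw=write --numjobs="$WRITERS"
  # ...then parse fdx_${WRITERS}w.log and stop once per-reader bandwidth
  # falls below the 365MB/s threshold
done
```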
```bash
# Test filesystem at /mnt/testfs for 30 seconds with 8 reader jobs
./fs-test.sh /mnt/testfs 30 8

# Test filesystem at /data/perf for 60 seconds with 16 reader jobs
./fs-test.sh /data/perf 60 16
```

Default test parameters:
- Block size: 1M
- I/O engine: libaio
- I/O depth: 32
- File size: 10G per job
- Job range: 4-32 jobs
- O_DIRECT: Configurable (disabled by default for filesystem compatibility)
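The filesystem analogue of the earlier block-device invocation, again only a representative sketch (directory path and job count are placeholders; note direct=0, matching the default above):

```bash
fio --name=fswrite \
    --directory=/mnt/testfs \
    --rw=write \
    --bs=1M \
    --ioengine=libaio \
    --iodepth=32 \
    --numjobs=8 \
    --size=10G \
    --direct=0 \
    --runtime=30 \
    --time_based \
    --group_reporting
```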
Tests block devices with random I/O patterns designed to simulate metadata-intensive workloads. This script distributes jobs evenly across the device using calculated offsets to avoid contention.
```bash
./fio_gen_meta.sh <device> <num_jobs> <read|write>
```

- device: Block device to test (e.g., /dev/sdb)
- num_jobs: Number of parallel FIO jobs to spawn
- read|write: Operation type (randread or randwrite)
```bash
# Run 8 random read jobs on /dev/sdb
./fio_gen_meta.sh /dev/sdb 8 read

# Run 16 random write jobs on /dev/nvme0n1
./fio_gen_meta.sh /dev/nvme0n1 16 write
```

Test parameters optimized for metadata operations:
- Block size: 64k (smaller blocks for metadata simulation)
- I/O engine: libaio
- I/O depth: 1 (low queue depth typical of metadata operations)
- Runtime: 30 seconds
- Direct I/O: Enabled
- Job distribution: Evenly spaced offsets across device
- Automatic offset calculation: Jobs are evenly distributed across the device size to minimize interference (see the sketch after this list)
- Per-job logging: Individual logs for each job enable detailed analysis
- Summary output: Aggregated results displayed at completion
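A sketch of how such even spacing can be computed (illustrative only; the actual arithmetic in fio_gen_meta.sh may differ):

```bash
# Evenly space N jobs across a block device by giving each a start offset.
DEVICE=/dev/sdb
NUM_JOBS=8
DEV_BYTES=$(blockdev --getsize64 "$DEVICE")   # total device size in bytes
STRIDE=$((DEV_BYTES / NUM_JOBS))              # region reserved for each job
for i in $(seq 0 $((NUM_JOBS - 1))); do
  echo "job $i: offset=$((i * STRIDE))"
done
```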
Creates a `test.fio` configuration file and `test.fio.log` output in the current directory.
Both data-test.sh and fs-test.sh generate detailed logs in timestamped directories:
- Individual test results for each job count
- Aggregated performance data
- Full FIO output for analysis
Test files are automatically cleaned up after completion.
- data-test.sh: Creates a `fio_logs_<timestamp>/` directory
- fs-test.sh: Creates a `fio_logs_fs_<timestamp>/` directory and temporary `fio_test_<timestamp>/` test files
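To skim aggregate throughput across a run, the summary lines can be pulled straight from the logs (assuming fio's default human-readable output; the glob matches the directory names above):

```bash
# Print every aggregate READ/WRITE bandwidth line from the collected logs
grep -rhE '^[[:space:]]*(READ|WRITE):' fio_logs_*/
```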
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Start with shorter duration tests (30-60 seconds) to quickly identify performance characteristics
- For production benchmarking, use longer durations (300+ seconds) for stable results
- Monitor system resources during testing to identify bottlenecks
- Use multi-host mode for testing shared storage systems under distributed load
- Ensure sufficient storage space for test files (10G × max_jobs × number of test phases; with the default maximum of 32 jobs, that is up to 320G per phase)