versity/fio-workload
FIO Workload Testing Suite

A comprehensive performance testing toolkit using FIO (Flexible I/O Tester) for benchmarking block devices and filesystems. This suite includes automated scripts for testing sequential read/write performance, finding optimal workload configurations, and conducting full-duplex performance analysis.

Overview

This repository contains three main testing tools:

  • data-test.sh: Block device performance testing
  • fs-test.sh: Filesystem performance testing with automatic scale-up phases
  • fio_gen_meta.sh: Metadata-intensive random I/O testing

Prerequisites

Before running the tests, ensure the following tools are installed:

  • fio (Flexible I/O Tester)
    • RHEL/CentOS/Fedora: dnf install fio
  • bc (Basic Calculator)
    • RHEL/CentOS/Fedora: dnf install bc

data-test.sh - Block Device Testing

Tests block devices with support for single-host and multi-host distributed testing scenarios.

Usage

./data-test.sh <mode> <block_device> <duration> [mode_specific_args]

Modes

1. Full Mode

Run all three test phases (write scale-up, read scale-up, and full duplex):

./data-test.sh full <device> <duration> [host_number] [total_hosts]

2. Full Duplex (FDX) Mode

Run manual full duplex test with fixed write and read jobs:

./data-test.sh fdx <device> <duration> <write_jobs> <read_jobs> [host_number] [total_hosts]

3. Scale Writers Mode

Find maximum writers that maintain read performance with fixed readers:

./data-test.sh scale-writers <device> <duration> <read_jobs> [host_number] [total_hosts]

Examples

Single-host testing:

# Run all test phases on /dev/sdb for 30 seconds
./data-test.sh full /dev/sdb 30

# Run full duplex with 8 write jobs and 4 read jobs
./data-test.sh fdx /dev/sdb 30 8 4

# Find optimal write scaling with 8 fixed read jobs
./data-test.sh scale-writers /dev/sdb 30 8

Multi-host testing:

# Run full duplex test with 4 hosts total
# On host 0:
./data-test.sh fdx /dev/sdb 30 10 6 0 4

# On host 1:
./data-test.sh fdx /dev/sdb 30 10 6 1 4

# On host 2:
./data-test.sh fdx /dev/sdb 30 10 6 2 4

# On host 3:
./data-test.sh fdx /dev/sdb 30 10 6 3 4

# Arguments explained: fdx /dev/sdb 30 10 6 <host_number> <total_hosts>
# - /dev/sdb: block device under test
# - 30: test duration in seconds
# - 10: number of write jobs (aim to saturate write performance)
# - 6: number of read jobs (use a representative number of tape drives per host; readers are rate-limited to 400MB/s each)
# - 0-3: unique host number (0-indexed)
# - 4: total number of hosts participating

Multi-Host Mode

In multi-host mode, each host receives a unique offset (host_number × 500G) to avoid data overlap when multiple systems test the same shared storage device simultaneously.
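The per-host offset described above can be sketched in shell (variable names are illustrative, not taken from data-test.sh):

```shell
#!/bin/sh
# Sketch: each host starts I/O at host_number * 500 GiB so hosts
# sharing one device never touch the same region (names illustrative)
host_number=2
offset="$((host_number * 500))G"   # fio accepts suffixed sizes such as "1000G"
echo "offset=$offset"
```

With four hosts numbered 0-3, this yields starting offsets of 0G, 500G, 1000G, and 1500G.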

Configuration

Default test parameters:

  • Block size: 1M
  • I/O engine: libaio
  • I/O depth: 32
  • Job range: 4-32 jobs
  • Direct I/O: Enabled
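Expressed as a standalone fio job file, the defaults above look roughly like this (a sketch for orientation only; data-test.sh builds its jobs internally, and the job name and device are placeholders):

```ini
# sketch of data-test.sh defaults; not the script's actual job file
[seq-write]
filename=/dev/sdb
rw=write
bs=1M
ioengine=libaio
iodepth=32
numjobs=4
direct=1
runtime=30
time_based=1
group_reporting=1
```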

fs-test.sh - Filesystem Testing

Runs a comprehensive 3-phase filesystem performance test to identify peak throughput and optimal workload configurations.

Usage

./fs-test.sh <test_directory> <duration> <reader_jobs>

Test Phases

  1. Phase 1: Write Scale-Up - Find peak write throughput by incrementally increasing concurrent write jobs
  2. Phase 2: Read Scale-Up - Find peak read throughput by incrementally increasing concurrent read jobs
  3. Phase 3: Full Duplex - With fixed readers at 400MB/s rate limit, scale writers until maximum write throughput or readers drop below 365MB/s
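The Phase 3 stop condition can be sketched as follows (variable names are illustrative, not taken from fs-test.sh; awk is used here for the floating-point comparison):

```shell
#!/bin/sh
# Sketch of the Phase 3 stop check (names illustrative):
# stop adding writers once aggregate read bandwidth falls below the floor
read_floor=365
read_bw="362.5"               # MB/s, as parsed from fio output in a real run
if awk -v bw="$read_bw" -v floor="$read_floor" 'BEGIN { exit !(bw < floor) }'; then
  result="stop"               # readers degraded below the floor
else
  result="continue"           # reads still healthy; keep scaling writers
fi
echo "$result"
```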

Examples

# Test filesystem at /mnt/testfs for 30 seconds with 8 reader jobs
./fs-test.sh /mnt/testfs 30 8

# Test filesystem at /data/perf for 60 seconds with 16 reader jobs
./fs-test.sh /data/perf 60 16

Configuration

Default test parameters:

  • Block size: 1M
  • I/O engine: libaio
  • I/O depth: 32
  • File size: 10G per job
  • Job range: 4-32 jobs
  • O_DIRECT: Configurable (disabled by default for filesystem compatibility)
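As a fio job file, the filesystem defaults above would be roughly the following (a sketch; the directory and job name are placeholders, and the `rate` line reflects the 400MB/s reader cap used in Phase 3):

```ini
# sketch of fs-test.sh reader defaults; not the script's actual job file
[fs-read]
directory=/mnt/testfs
rw=read
bs=1M
ioengine=libaio
iodepth=32
size=10G
numjobs=4
direct=0
rate=400m
runtime=30
time_based=1
group_reporting=1
```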

fio_gen_meta.sh - Metadata-Intensive Random I/O Testing

Tests block devices with random I/O patterns designed to simulate metadata-intensive workloads. This script distributes jobs evenly across the device using calculated offsets to avoid contention.

Usage

./fio_gen_meta.sh <device> <num_jobs> <read|write>

Parameters

  • device: Block device to test (e.g., /dev/sdb)
  • num_jobs: Number of parallel FIO jobs to spawn
  • read|write: Operation type; read runs fio's randread workload, write runs randwrite

Examples

# Run 8 random read jobs on /dev/sdb
./fio_gen_meta.sh /dev/sdb 8 read

# Run 16 random write jobs on /dev/nvme0n1
./fio_gen_meta.sh /dev/nvme0n1 16 write

Configuration

Test parameters optimized for metadata operations:

  • Block size: 64k (smaller blocks for metadata simulation)
  • I/O engine: libaio
  • I/O depth: 1 (low queue depth typical of metadata operations)
  • Runtime: 30 seconds
  • Direct I/O: Enabled
  • Job distribution: Evenly spaced offsets across device

Features

  • Automatic offset calculation: Jobs are evenly distributed across the device size to minimize interference
  • Per-job logging: Individual logs for each job enable detailed analysis
  • Summary output: Aggregated results displayed at completion
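The even-spacing logic can be sketched as follows (variable names are assumed; on a real device the size would come from something like `blockdev --getsize64`, so it is hard-coded here):

```shell
#!/bin/sh
# Sketch: spread num_jobs starting offsets evenly across the device
# (dev_size hard-coded for illustration; the real script reads the device)
dev_size=$((1024 * 1024 * 1024 * 1024))   # 1 TiB, in bytes
num_jobs=8
stride=$((dev_size / num_jobs))
i=0
offsets=""
while [ "$i" -lt "$num_jobs" ]; do
  offsets="$offsets $((i * stride))"     # byte offset for job i
  i=$((i + 1))
done
echo "offsets:$offsets"
```

Each job then passes its own offset to fio so the jobs operate on disjoint regions of the device.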

Test Output

Creates test.fio configuration file and test.fio.log output in the current directory.

Test Output

The data-test.sh and fs-test.sh scripts generate detailed logs in timestamped directories:

  • Individual test results for each job count
  • Aggregated performance data
  • Full FIO output for analysis

Test files are automatically cleaned up after completion.

Output Directories

  • data-test.sh: Creates fio_logs_<timestamp>/ directory
  • fs-test.sh: Creates fio_logs_fs_<timestamp>/ directory and temporary fio_test_<timestamp>/ test files

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Tips

  • Start with shorter duration tests (30-60 seconds) to quickly identify performance characteristics
  • For production benchmarking, use longer durations (300+ seconds) for stable results
  • Monitor system resources during testing to identify bottlenecks
  • Use multi-host mode for testing shared storage systems under distributed load
  • Ensure sufficient storage space for test files (10G × max_jobs × number of test phases)
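As a worst case for the space estimate in the last tip, assuming the default 10G per job, the 32-job ceiling, and three phases (and that no files are reused between phases):

```shell
#!/bin/sh
# Rough worst-case space estimate: 10G per job x max jobs x phases
per_job_g=10
max_jobs=32
phases=3
echo "$((per_job_g * max_jobs * phases))G"
```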
