Keran Li<sup>a</sup>, Wen Lai<sup>b,*</sup>

<sup>a</sup>State Key Laboratory of Critical Earth Material Cycling and Mineral Deposits, Frontiers Science Center for Critical Earth Material Cycling, School of Earth Sciences and Engineering, Nanjing University, Nanjing, 210023, China

<sup>b</sup>Gannan Normal University

*Corresponding author
FastMeasure is a professional tool for processing rock microscopic images that automatically detects and segments grains. It introduces YOLO-based detection, multiple SAM variants, automatic scale detection, and enhanced geometric analysis. Built on deep learning, the system supports two model combinations, YOLO+FastSAM and YOLO+MobileSAM, combined with intelligent scale bar detection and rich geometric parameter calculation, enabling precise extraction of grain information from rock microscopic images and generation of complete statistical analysis reports.
This project is inspired by and builds upon segmenteverygrain by Zoltán Sylvester. We appreciate the excellent work done by the segmenteverygrain team in developing a U-Net + SAM based grain segmentation solution for geomorphology and sedimentary geology research.
While segmenteverygrain pioneered the use of SAM for grain segmentation, FastMeasure takes a different approach and introduces several enhancements:
| Feature | segmenteverygrain | FastMeasure |
|---|---|---|
| Detection Model | U-Net (patch-based CNN) | YOLO (real-time object detection) |
| SAM Variants | SAM 2.1 only | FastSAM + MobileSAM |
| Processing Speed | ~2.5 min per 3 MP image | ~0.3 s per image with FastSAM (GPU) |
| Scale Calibration | Manual (Shift+drag) | Automatic + Manual calibration |
| Geometric Parameters | Basic shape metrics | 10+ parameters including fractal dimension, angularity |
| Interactive Mode | Jupyter notebook based | Standalone GUI with unified key controls |
| Batch Processing | Notebook-based | Command-line batch processing |
| Model Fine-tuning | U-Net (TensorFlow) | YOLO (Ultralytics, easier) |
| Training Data | Manual annotation | Auto from interactive results |
| Code Structure | Notebook + modules | Modular core library with CLI |
The system supports three usage modes:
- Auto Processing Mode: YOLO detection + SAM auto segmentation
- Batch Processing Mode: Batch processing of all images in a folder
- Interactive Mode: Manual point selection for fine segmentation via GUI
Similar to segmenteverygrain's U-Net fine-tuning, FastMeasure supports YOLO model fine-tuning to improve detection accuracy on your specific rock types:
```bash
# Quick fine-tune from interactive segmentation results
python train_yolo.py --mode quick --input results/mobilesam/interactive/
# The fine-tuned model can then be used for better detection
```

See the Model Training Guide below for detailed instructions.
| Model Combination | Features | Applicable Scenarios |
|---|---|---|
| YOLO + FastSAM | Fast, lightweight | Large batch quick processing |
| YOLO + MobileSAM | High precision, supports interaction | High precision requirements, interactive annotation |
- Automatically recognize red scale bar at bottom-right corner of images
- Calculate conversion factor from pixels to actual microns
- Support custom scale bar length configuration
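As an illustration of the idea, the detection can be sketched as finding red pixels in the bottom-right quadrant and measuring the bar's horizontal extent. This is a simplified NumPy sketch with demonstration thresholds, not FastMeasure's implementation (the actual `scale_detector.py` module uses OpenCV color thresholds such as `red_lower1`/`red_upper1`):

```python
import numpy as np

def detect_red_scale_bar(rgb, known_length_um=1000.0):
    """Find a red scale bar in the bottom-right quadrant and return
    the conversion factor in microns per pixel (None if not found)."""
    h, w, _ = rgb.shape
    roi = rgb[h // 2:, w // 2:]                    # bottom-right quadrant only
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    red = (r > 150) & (g < 80) & (b < 80)          # crude "red" mask for the demo
    if not red.any():
        return None                                # fall back to manual calibration
    cols = np.where(red.any(axis=0))[0]
    bar_px = cols.max() - cols.min() + 1           # horizontal extent in pixels
    return known_length_um / bar_px                # microns per pixel

# Synthetic image: a 100-px-wide red bar near the bottom-right corner
img = np.zeros((400, 400, 3), dtype=np.uint8)
img[380, 280:380, 0] = 255
print(detect_red_scale_bar(img, known_length_um=1000.0))  # -> 10.0 (um/px)
```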
- Automatic grain detection and segmentation
- Intelligent grain numbering and area labeling
- Support custom labeling styles (font, color, outline, etc.)
The system can calculate various grain geometric parameters:
- Basic Parameters: Area, perimeter, centroid coordinates, bounding rectangle
- Shape Parameters: Circularity, Aspect Ratio, Rectangularity
- Structural Parameters: Compactness, Roundness, Convexity
- Advanced Parameters: Fractal Dimension, Angularity
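A few of these parameters can be computed directly from a grain's measured area, perimeter, and fitted-ellipse axes. The formulas below are the standard definitions and may differ in detail from FastMeasure's geometry module:

```python
import math

def shape_metrics(area, perimeter, major_axis, minor_axis, convex_area=None):
    """Standard definitions of a few grain shape parameters."""
    m = {
        # 4*pi*A / P^2: equals 1.0 for a perfect circle, smaller for rough grains
        "circularity": 4.0 * math.pi * area / perimeter ** 2,
        # Elongation of the fitted ellipse
        "aspect_ratio": major_axis / minor_axis,
        # P^2 / A: lowest for compact shapes
        "compactness": perimeter ** 2 / area,
    }
    if convex_area:
        m["convexity"] = area / convex_area  # 1.0 for convex grains
    return m

# Sanity check with a circle of radius 10: circularity should be ~1.0
r = 10.0
print(shape_metrics(math.pi * r**2, 2 * math.pi * r, 2 * r, 2 * r))
```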
- `config.yaml` / `config_mobilesam.yaml`: main configuration files (model paths, processing parameters, output settings)
- `geometry_config.yaml`: geometric parameter configuration file (custom CSV export fields)
- Python 3.8+
- PyTorch
- CUDA (recommended for GPU acceleration)
1. Clone this repository:

```bash
git clone https://github.com/KeranLi/FastMeasure.git
cd FastMeasure
```

2. Create and activate a virtual environment (conda recommended):

```bash
conda create -n rockseg python=3.8
conda activate rockseg
```

3. Install dependencies:

```bash
pip install torch torchvision opencv-python pandas matplotlib numpy pyyaml ultralytics shapely scikit-image pillow
# MobileSAM interactive mode requires an additional package
pip install mobile_sam
```

4. Prepare model files:
   - YOLO model: `./models/best_yolo_20260107.pt` (FastSAM workflow) or `./models/best.pt` (MobileSAM workflow)
   - FastSAM model: `./models/FastSAM-s.pt`
   - MobileSAM model: `./models/mobile_sam.pt`
The project provides a unified entry script `run.py` to start FastSAM or MobileSAM:
```bash
# FastSAM processing
python run.py fastsam --input path/to/image.tif

# MobileSAM batch processing
python run.py mobilesam --input path/to/folder --batch

# Interactive mode
python run.py mobilesam --interactive
```

The original startup scripts can also be called directly:

```bash
# FastSAM: process a single image
python run_fastsam.py --input path/to/image.tif
# Or use the unified entry
python run.py fastsam --input path/to/image.tif

# FastSAM: batch process a folder
python run_fastsam.py --input path/to/folder --batch
# Or use the unified entry
python run.py fastsam --input path/to/folder --batch

# FastSAM: use a custom configuration file
python run_fastsam.py --config custom_config.yaml --input image.tif

# FastSAM: override parameters on the command line
python run_fastsam.py --input image.tif --conf 0.3 --min-area 50 --output my_results

# MobileSAM: interactive menu (follow the prompts to select a
# processing mode and input parameters)
python run_mobilesam.py
# Or use the unified entry
python run.py mobilesam

# MobileSAM: process a single image
python run_mobilesam.py --input path/to/image.tif
# Or use the unified entry
python run.py mobilesam --input path/to/image.tif

# MobileSAM: batch processing
python run_mobilesam.py --input path/to/folder --batch
# Or use the unified entry
python run.py mobilesam --input path/to/folder --batch

# MobileSAM: interactive mode
python run_mobilesam.py --interactive
# Or use the unified entry
python run.py mobilesam --interactive
```

| Key/Operation | Function |
|---|---|
| Left click | Add foreground point (segmentation target) |
| Right click | Add background point (exclusion area) |
| `X` | Delete last grain |
| `D` | Delete all grains |
| `S` | Save results |
| `Shift+S` | Quick save complete results |
| `C` | Clear all point marks |
| `R` | Reset interface |
| `M` | Manual scale calibration (measure known length) |
| `H` | Show help |
| `Q` | Quit |
When automatic scale bar detection fails, you can manually calibrate the scale:
1. Press `M` to enter scale calibration mode
2. Click the start point of a known-length feature (e.g., scale bar, ruler)
3. Click the end point of the feature
4. Enter the actual length in microns when prompted
5. The system calculates and stores the scale factor (um/px)
This allows you to use any known-length feature in the image for calibration.
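The calibration arithmetic is simple; a sketch of what the two clicks compute (the function name here is illustrative, not FastMeasure's API):

```python
import math

def scale_factor_um_per_px(p1, p2, known_length_um):
    """Derive the um/px scale factor from two clicked (x, y) points on a
    feature of known physical length."""
    dist_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    if dist_px == 0:
        raise ValueError("start and end points must differ")
    return known_length_um / dist_px

# A 500-px click span known to be 1000 um long -> 2.0 um/px
print(scale_factor_um_per_px((100, 200), (600, 200), 1000.0))  # -> 2.0
```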
The system is fully compatible with macOS. However, please note:
- First run may be slower due to model loading
- Interactive mode requires a display (not supported on remote SSH without X11)
- File dialogs run on main thread to ensure macOS compatibility
FastMeasure includes a YOLO Fine-tuning Module (train_yolo.py) that allows you to improve detection accuracy on your specific rock types, similar to segmenteverygrain's U-Net fine-tuning capability.
- Better Accuracy: YOLO models trained on generic datasets may miss specific grain types in your samples
- Adapt to New Rock Types: Fine-tune on your own thin-section images for best results
- Iterative Improvement: Use interactive mode results as training data
The easiest way to fine-tune is using your interactive segmentation results:
```bash
# Step 1: Generate some training data using interactive mode
python run.py mobilesam --interactive
# Segment several images and save the results

# Step 2: Fine-tune YOLO using those results
python train_yolo.py --mode quick --input results/mobilesam/interactive/ --epochs 50

# Step 3: Use the fine-tuned model
cp training_outputs/runs/train_*/weights/best.pt ./models/my_finetuned_yolo.pt
# Update config.yaml: yolo: "./models/my_finetuned_yolo.pt"
```

| Option | Description | Default |
|---|---|---|
| `--mode` | Training mode: `quick` (from interactive results) or `train` (from dataset) | `quick` |
| `--input` | Directory containing interactive mode results | Required for `quick` |
| `--data` | Path to YOLO-format dataset YAML | Required for `train` |
| `--base` | Base model: `yolov8n/s/m/l/x.pt` or path to a `.pt` file | `yolov8n.pt` |
| `--epochs` | Number of training epochs | 50 |
| `--imgsz` | Input image size | 1024 |
| `--batch` | Batch size (reduce if out of memory) | 8 |
| `--device` | Device: `auto`, `cpu`, `cuda`, `mps` | `auto` |
```bash
# Fine-tune from an existing model with more epochs
python train_yolo.py --mode quick \
    --input results/mobilesam/interactive/ \
    --base ./models/best_yolo_20260107.pt \
    --epochs 100 \
    --imgsz 1024

# Use a larger model for better accuracy (slower)
python train_yolo.py --mode quick \
    --input results/interactive/ \
    --base yolov8m.pt \
    --epochs 50

# Train with a custom YOLO-format dataset
python train_yolo.py --mode train \
    --data ./my_grain_dataset/dataset.yaml \
    --epochs 200
```

If you have existing annotations, you can create a YOLO-format dataset:
dataset/
├── images/
│ ├── train/
│ ├── val/
│ └── test/
├── labels/
│ ├── train/
│ ├── val/
│ └── test/
└── dataset.yaml
dataset.yaml format:
```yaml
path: /path/to/dataset
train: images/train
val: images/val
test: images/test
nc: 1
names: ['grain']
```

```yaml
# Model path configuration
model_paths:
  yolo: "./models/best_yolo_20260107.pt"  # YOLO model path
  fastsam: "./models/FastSAM-s.pt"        # FastSAM model path
device: "cpu"                             # Running device: cpu or cuda

# Scale bar detection configuration
scale_detection:
  enabled: true
  known_length_um: 1000.0  # Scale bar actual length (microns)

# Processing parameter configuration
processing:
  yolo_confidence: 0.25      # YOLO detection confidence threshold
  min_area: 30               # Minimum grain area (pixels)
  remove_edge_grains: false  # Whether to remove edge grains

# Output configuration
output:
  root_dir: "results"       # Result output directory
  save_visualization: true  # Save visualization results
  save_statistics: true     # Save CSV statistics file
  save_summary: true        # Save JSON summary
```

Note:
- `config.yaml` is used for FastSAM mode (default: CPU)
- `config_mobilesam.yaml` is used for MobileSAM mode (default: CPU)
- Change `device` to `cuda` if you have an NVIDIA GPU and CUDA installed
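A config in this shape can be loaded with PyYAML and overridden from the command line. A minimal sketch (the helper name `load_config` and the inlined config text are illustrative, not FastMeasure's actual API):

```python
import yaml

# Abbreviated config in the same shape as config.yaml above
CONFIG_TEXT = """
model_paths:
  yolo: "./models/best_yolo_20260107.pt"
  fastsam: "./models/FastSAM-s.pt"
device: "cpu"
processing:
  yolo_confidence: 0.25
  min_area: 30
"""

def load_config(text, device_override=None):
    """Parse the YAML config; a CLI flag like --device wins over the file."""
    cfg = yaml.safe_load(text)
    if device_override:
        cfg["device"] = device_override
    return cfg

cfg = load_config(CONFIG_TEXT, device_override="cuda")
print(cfg["device"], cfg["processing"]["yolo_confidence"])  # -> cuda 0.25
```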
```yaml
grain_statistics_csv:
  enabled: true
  # Columns finally written to CSV (output in this order)
  keep_columns:
    - label
    - area
    - perimeter
    - circularity
    - aspect_ratio
    - compactness
    - roundness
    - area_um2
    - diameter_um
```

After processing is complete, the system generates the following files in the output directory:
| File Name | Description |
|---|---|
| `segmentation_result.png` | Segmentation result visualization (with grain contours) |
| `segmentation_labeled.png` | Labeled result image (with grain numbers and areas) |
| `segmentation_mask.png` | Binary segmentation mask image |
| `grain_statistics.csv` | Grain statistics data table |
| `summary.json` | Processing summary information (JSON format) |
| `performance.json` | Performance statistics information |
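The `keep_columns` list in `geometry_config.yaml` controls which computed columns reach `grain_statistics.csv`. A minimal sketch of that filtering with pandas (the column values here are made-up illustration data):

```python
import pandas as pd

# keep_columns as configured in geometry_config.yaml (abbreviated)
keep_columns = ["label", "area", "perimeter", "circularity"]

# A grain-statistics table with more columns than we want to export
stats = pd.DataFrame({
    "perimeter": [35.4, 61.2],
    "area": [120.5, 340.2],
    "label": [1, 2],
    "circularity": [0.91, 0.78],
    "centroid_x": [10.2, 55.8],  # computed but not exported
})

# Keep only configured columns, in the configured order
export = stats[[c for c in keep_columns if c in stats.columns]]
print(list(export.columns))  # -> ['label', 'area', 'perimeter', 'circularity']
```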
All results are now organized under a unified results/ directory:
results/
├── fastsam/ # FastSAM results
│ ├── auto/ # Automatic processing results
│ │ └── [image_name]/
│ │ ├── segmentation_result.png
│ │ ├── segmentation_labeled.png
│ │ ├── segmentation_mask.png
│ │ ├── grain_statistics.csv
│ │ └── summary.json
│ └── interactive/ # Interactive processing results
│ └── [timestamp]/
│ └── ...
├── mobilesam/ # MobileSAM results
│ ├── auto/ # Automatic processing results
│ └── interactive/ # Interactive processing results
├── logs/ # Unified log directory
│ ├── fastsam/ # FastSAM logs
│ └── mobilesam/ # MobileSAM logs
└── temp/ # Temporary files and cache
Benefits:
- All results in one place - easy to find and manage
- Clear separation between modes (FastSAM/MobileSAM) and types (auto/interactive)
- Unified logs for easier debugging
- No scattered result folders in project root
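One practical benefit of the unified layout is that results can be aggregated with a single recursive glob. A sketch (not part of FastMeasure) that gathers every statistics table under the tree:

```python
from pathlib import Path
import csv

def collect_statistics(results_root="results"):
    """Gather rows from every grain_statistics.csv under the unified
    results tree (fastsam/ and mobilesam/, auto and interactive runs)."""
    rows = []
    for csv_path in Path(results_root).rglob("grain_statistics.csv"):
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                row["source"] = str(csv_path.parent)  # remember which run
                rows.append(row)
    return rows
```

Calling `collect_statistics("results")` after a few runs yields one flat list of grain records, each tagged with the directory it came from, ready for cross-run statistics.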
.
├── run.py # Unified entry script (new)
├── run_fastsam.py # FastSAM startup script
├── run_mobilesam.py # MobileSAM startup script (supports interactive mode)
├── train_yolo.py # YOLO model training/fine-tuning script (new)
├── mobilesam_interactive.py # MobileSAM standalone interactive tool
├── config.yaml # FastSAM configuration file
├── config_mobilesam.yaml # MobileSAM configuration file
├── geometry_config.yaml # Geometric parameter configuration file
│
├── core/ # Core module (new)
│ ├── __init__.py # Core module initialization
│ ├── seg_tools.py # Shared tool functions
│ ├── seg_optimize.py # Shared segmentation optimization
│ ├── cli_base.py # Shared CLI functions
│ ├── scale_calibration.py # Manual scale calibration (new)
│ └── yolo_trainer.py # YOLO fine-tuning module (new)
│
├── fastsam/ # FastSAM module
│ ├── rock_fastsam_system.py # FastSAM main system
│ ├── yolo_fastsam.py # YOLO+FastSAM pipeline
│ ├── seg_engine.py # Segmentation engine
│ ├── seg_optimize.py # Segmentation optimization (compatibility wrapper)
│ └── seg_tools.py # Tool functions (compatibility wrapper)
│
├── mobilesam/ # MobileSAM module
│ ├── rock_mobilesam_system.py # MobileSAM main system
│ ├── yolo_mobilesam.py # YOLO+MobileSAM pipeline
│ ├── mobile_sam_engine.py # MobileSAM engine
│ ├── seg_optimize.py # Segmentation optimization (compatibility wrapper)
│ └── seg_tools.py # Tool functions (compatibility wrapper)
│
├── geometry/ # Geometric parameter calculation module
│ ├── grain_metric.py # Grain shape parameter calculation
│ ├── config_loader.py # Config loader
│ └── export_csv.py # CSV export utility
│
├── scale_detector.py # Scale bar detection module
├── grain_marker.py # Grain labeling module
├── models/ # Model files directory
├── results/ # Default output directory
└── Boulder_20260107/ # Test data example
Performance tests on an RTX 3060 GPU:
| Model | GPU Inference | CPU Inference | Speed Comparison |
|---|---|---|---|
| FastSAM | ~77 ms | ~294 ms | CPU ~4x slower than GPU |
| MobileSAM | ~3.7 s | ~101 s | CPU ~27x slower than GPU |
Recommendation: For large batch processing, GPU acceleration is recommended; for small batches or testing, CPU mode can be used.
| Package | Purpose |
|---|---|
| `torch` | Deep learning framework |
| `ultralytics` | YOLOv8 and SAM models |
| `opencv-python` | Image processing and scale bar detection |
| `pandas` | Data processing and statistics |
| `matplotlib` | Result visualization |
| `numpy` | Numerical computation |
| `pyyaml` | Configuration file parsing |
| `shapely` | Geometric calculation |
| `scikit-image` | Image processing tools |
| `mobile_sam` | MobileSAM library (required for interactive mode) |
Q: What should I do if scale bar detection fails?
A: Check whether there is a clear red scale bar at the bottom-right corner of the image, or adjust color threshold parameters such as `red_lower1`/`red_upper1` in the configuration file.

Q: How do I adjust detection sensitivity?
A: Modify the `yolo_confidence` parameter in the configuration file (smaller values mean more sensitive detection but may introduce noise).

Q: Interactive mode cannot start the GUI?
A: Ensure the system has GUI support, or try setting the environment variable `MPLBACKEND=TkAgg`.
See CHANGELOG.md for detailed update content of each project version.
Contributions are welcome! If you have improvement suggestions or find issues, you can contribute code by submitting an issue or pull request.
This project builds upon the excellent work of segmenteverygrain by Zoltán Sylvester and colleagues. We thank them for pioneering the application of SAM in sedimentary grain segmentation and for making their work open-source.
Key improvements in FastMeasure:
- YOLO-based Detection: Replaced patch-based U-Net with YOLO for real-time grain detection
- Multiple SAM Backends: Support for both FastSAM (speed) and MobileSAM (precision)
- Automatic Scale Detection: Intelligent red scale bar recognition at image corners
- Enhanced Geometric Analysis: 10+ grain shape parameters including fractal dimension
- Unified Architecture: Modular core library with command-line interface
- Cross-Platform: Full macOS and Linux/Windows support
FastMeasure can be packaged as a standalone executable for Windows, allowing users to run it without installing Python.
```bash
# Install PyInstaller
pip install pyinstaller

# Run the build script
python build_exe.py
```

This will:
- Clean previous builds
- Package all Python dependencies
- Include model configs and core modules
- Create a `dist/FastMeasure/` folder with the executable
```bash
# Build one-directory (recommended, faster startup)
pyinstaller --name FastMeasure \
    --windowed \
    --onedir \
    --add-data "core;core" \
    --add-data "fastsam;fastsam" \
    --add-data "mobilesam;mobilesam" \
    --add-data "geometry;geometry" \
    --add-data "config.yaml;." \
    --hidden-import ultralytics \
    --hidden-import torch \
    gui_launcher.py
```

After building:
dist/
└── FastMeasure/
├── FastMeasure.exe # Main executable
├── models/ # Place model files here
├── results/ # Output directory
└── _internal/ # Python libraries
Before distributing:
1. Download model files (see Model Files)
2. Place them in `dist/FastMeasure/models/`
3. Zip the entire `FastMeasure/` folder
4. Share the zip file with users

To create an installer with Inno Setup:
1. Install Inno Setup
2. Open `installer.iss` in the Inno Setup Compiler
3. Build to create `FastMeasure_Setup.exe`
- Executable size: ~500MB-1GB (includes Python + PyTorch)
- Startup time: First launch may take 10-30 seconds (model loading)
- CPU mode: The executable defaults to CPU mode for compatibility
- Model files: Not included in build (too large), must be downloaded separately