Optimizer Benchmark

A benchmarking suite for evaluating and comparing PyTorch optimization algorithms on 2D mathematical functions.

🌟 Highlights

  • Benchmarks optimizers from the pytorch_optimizer library.
  • Uses Optuna for hyperparameter tuning.
  • Generates trajectory visualizations for each optimizer and function (see the sketch after this list).
  • Presents performance rankings on a project website.
  • Configurable via a config.toml file.
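
To give a sense of how such a trajectory is produced, here is a minimal sketch (not the project's actual code; the test function, optimizer choice, and step count are illustrative):

import torch

def rosenbrock(p):
    # Classic 2D Rosenbrock function: global minimum at (1, 1).
    return (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2

params = torch.tensor([-1.5, 2.0], requires_grad=True)
optimizer = torch.optim.Adam([params], lr=0.05)

trajectory = []
for _ in range(500):
    optimizer.zero_grad()
    loss = rosenbrock(params)
    loss.backward()
    optimizer.step()
    # Record the current (x, y) position for plotting later.
    trajectory.append(params.detach().clone())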

ℹ️ Overview

This project provides a framework to evaluate and compare the performance of various PyTorch optimizers. It uses algorithms from pytorch_optimizer and performs hyperparameter searches with Optuna. The benchmark is run on a suite of standard 2D mathematical test functions, and the results, including optimization trajectories, are visualized and ranked.
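
To make the hyperparameter-search step concrete, an Optuna tuning loop of this shape could look like the following (a hedged sketch; the search space, trial count, and objective are illustrative, not the project's actual configuration):

import optuna
import torch

def objective(trial):
    # Sample a learning rate on a log scale (illustrative range).
    lr = trial.suggest_float("lr", 1e-4, 1.0, log=True)
    params = torch.tensor([-1.5, 2.0], requires_grad=True)
    optimizer = torch.optim.Adam([params], lr=lr)
    for _ in range(200):
        optimizer.zero_grad()
        loss = (1 - params[0]) ** 2 + 100 * (params[1] - params[0] ** 2) ** 2
        loss.backward()
        optimizer.step()
    # Score this hyperparameter choice by the final loss reached.
    with torch.no_grad():
        final = (1 - params[0]) ** 2 + 100 * (params[1] - params[0] ** 2) ** 2
    return final.item()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)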

Warning

Important Limitations: These benchmark results are based on synthetic 2D functions and may not reflect real-world performance when training actual neural networks. The rankings should only be used as a reference, not as definitive guidance for choosing optimizers in practical applications.

📌 Benchmark Functions

The optimizers are evaluated on the following standard 2D test functions. Click on a function's name to learn more about it.

  • Ackley
  • Lévy N. 13
  • Langermann
  • Eggholder
  • Gramacy & Lee
  • Griewank
  • Rastrigin
  • Rosenbrock
  • Weierstrass
  • Styblinski–Tang
  • Goldstein-Price
  • Gradient Labyrinth
  • Neural Canyon
  • Quantum Well
  • Beale
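
These are all simple closed-form expressions. For example, the well-known 2D Rastrigin function, written here in PyTorch purely for illustration (not the repository's implementation):

import math
import torch

def rastrigin(p):
    # f(x, y) = 20 + x^2 - 10*cos(2*pi*x) + y^2 - 10*cos(2*pi*y)
    # Global minimum f(0, 0) = 0, surrounded by a regular grid of local minima,
    # which makes it a hard test for optimizers prone to getting stuck.
    x, y = p[0], p[1]
    return (20 + x ** 2 - 10 * torch.cos(2 * math.pi * x)
               + y ** 2 - 10 * torch.cos(2 * math.pi * y))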

📊 Results & Visualizations

The full benchmark results, including performance rankings and detailed trajectory plots for each optimizer, are available on the project website.

🚀 Quick Start

# Clone repository
git clone --depth 1 https://github.com/AidinHamedi/Optimizer-Benchmark.git
cd Optimizer-Benchmark

# Install dependencies
uv sync

# Run the benchmark
python runner.py

The script will load settings from config.toml, run hyperparameter tuning for each optimizer, and save the results and visualizations to the ./results/ directory.
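
The exact keys in config.toml are defined by the project; purely as an illustration, settings of this kind are typically read in Python like so (the key names below are hypothetical, not the repository's actual schema):

import tomllib  # standard library in Python 3.11+

with open("config.toml", "rb") as f:
    config = tomllib.load(f)

# Hypothetical keys for illustration only; consult the repository's
# config.toml for the actual schema.
n_trials = config.get("n_trials", 100)
results_dir = config.get("results_dir", "./results/")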

🤝 Contributing

Contributions are welcome! In particular, I’m looking for help improving and expanding the web page.

If you’d like to contribute, please feel free to submit a pull request or open an issue to discuss your ideas.

📝 License

 Copyright (c) 2025 Aidin Hamedi

 This software is released under the MIT License.
 https://opensource.org/licenses/MIT