A benchmarking suite for evaluating and comparing PyTorch optimization algorithms on 2D mathematical functions.
- Benchmarks optimizers from the `pytorch_optimizer` library.
- Uses Optuna for hyperparameter tuning.
- Generates trajectory visualizations for each optimizer and function.
- Presents performance rankings on a project website.
- Configurable via a `config.toml` file.
This project provides a framework to evaluate and compare the performance of various PyTorch optimizers. It uses algorithms from pytorch_optimizer and performs hyperparameter searches with Optuna. The benchmark is run on a suite of standard 2D mathematical test functions, and the results, including optimization trajectories, are visualized and ranked.
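The tuning loop described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: `rosenbrock` and `run_trial` are hypothetical names, a plain grid search stands in for Optuna's sampler, and `torch.optim.Adam` stands in for the `pytorch_optimizer` algorithms under test.

```python
import torch

def rosenbrock(xy):
    # Rosenbrock: f(x, y) = (1 - x)^2 + 100 (y - x^2)^2, global minimum at (1, 1).
    x, y = xy[0], xy[1]
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def run_trial(lr, steps=500):
    """Run one trial: optimize the test function from a fixed start, return final loss."""
    xy = torch.tensor([-1.5, 2.0], requires_grad=True)
    opt = torch.optim.Adam([xy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = rosenbrock(xy)
        loss.backward()
        opt.step()
    return loss.item()

# Crude grid search over the learning rate, standing in for Optuna's sampler.
best_lr = min((1e-3, 1e-2, 1e-1), key=run_trial)
```

In the real benchmark, Optuna replaces the grid search and proposes hyperparameters trial by trial, while the trajectory of `xy` is recorded for the visualizations.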
Warning
Important Limitations: These benchmark results are based on synthetic 2D functions and may not reflect real-world performance when training actual neural networks. The rankings should only be used as a reference, not as definitive guidance for choosing optimizers in practical applications.
The optimizers are evaluated on the following standard 2D test functions. Click on a function's name to learn more about it.
| Function | Function |
|---|---|
| Ackley | Lévy N. 13 |
| Langermann | Eggholder |
| Gramacy & Lee | Griewank |
| Rastrigin | Rosenbrock |
| Weierstrass | Styblinski–Tang |
| Goldstein–Price | Gradient Labyrinth |
| Neural Canyon | Quantum Well |
| Beale | |
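As a concrete example, the Ackley function from the table above can be written as a PyTorch-differentiable function. This is an illustrative sketch; the repository's own implementation may differ in naming and shape conventions.

```python
import math
import torch

def ackley(xy: torch.Tensor) -> torch.Tensor:
    # Ackley (2D form), global minimum f(0, 0) = 0:
    # f(x, y) = -20 exp(-0.2 sqrt(0.5 (x^2 + y^2)))
    #           - exp(0.5 (cos(2 pi x) + cos(2 pi y))) + e + 20
    x, y = xy[0], xy[1]
    term1 = -20.0 * torch.exp(-0.2 * torch.sqrt(0.5 * (x ** 2 + y ** 2)))
    term2 = -torch.exp(0.5 * (torch.cos(2 * math.pi * x) + torch.cos(2 * math.pi * y)))
    return term1 + term2 + math.e + 20.0
```

Because the function is built from `torch` operations, `loss.backward()` gives exact gradients, and its many local minima make it a useful stress test for optimizers.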
The full benchmark results, including performance rankings and detailed trajectory plots for each optimizer, are available on the project website.
```shell
# Clone repository
git clone --depth 1 https://github.com/AidinHamedi/Optimizer-Benchmark.git
cd Optimizer-Benchmark

# Install dependencies
uv sync

# Run the benchmark
python runner.py
```

The script will load settings from `config.toml`, run hyperparameter tuning for each optimizer, and save the results and visualizations to the `./results/` directory.
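To give a sense of what the configuration controls, a `config.toml` for a benchmark like this might look as follows. The keys shown here are hypothetical and are not taken from the repository; consult the actual file for the real schema.

```toml
# Hypothetical config.toml sketch -- key names are illustrative only.
[tuning]
trials = 100   # Optuna trials per optimizer/function pair
steps = 500    # optimization steps per trial

[output]
results_dir = "./results"
```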
Contributions are welcome! In particular, I’m looking for help improving and expanding the web page.
If you’d like to contribute, please feel free to submit a pull request or open an issue to discuss your ideas.
- Virtual Library of Simulation Experiments: Test Functions and Datasets for Optimization Algorithms. Simon Fraser University, curated by Derek Bingham (dbingham@stat.sfu.ca). https://www.sfu.ca/~ssurjano/optimization.html
- Kim, H. (2021). pytorch_optimizer: optimizer & lr scheduler & loss function collections in PyTorch (Version 2.12.0) [Computer software]. https://github.com/kozistr/pytorch_optimizer
Copyright (c) 2025 Aidin Hamedi. This software is released under the MIT License. https://opensource.org/licenses/MIT