HippoTrainer

Gradient-Based Hyperparameter Optimization for PyTorch 🦛

HippoTrainer is a PyTorch-compatible library for gradient-based hyperparameter optimization, implementing cutting-edge algorithms that leverage automatic differentiation to efficiently tune hyperparameters.

📬 Assets

  1. Technical Meeting 1 - Presentation
  2. Technical Meeting 2 - Jupyter Notebook
  3. Technical Meeting 3 - Jupyter Notebook
  4. Documentation
  5. Tests
  6. Blog Post

🚀 Features

  • Algorithm Zoo: T1-T2, Neumann, HOAG, DrMAD
  • PyTorch Native: Direct integration with torch.nn.Module
  • Memory Efficient: Checkpointing & implicit differentiation
  • Scalable: From laptop to cluster with PyTorch backend

📜 Algorithms

  • T1-T2 (Paper): One-step unrolled optimization
  • Neumann (Paper): Leveraging Neumann series approximation for implicit differentiation
  • HOAG (Paper): Implicit differentiation via conjugate gradient
  • DrMAD (Paper): Memory-efficient piecewise-linear backpropagation
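
All these methods target the standard bilevel problem (a general formulation, not specific to this library): the hyperparameters λ minimize the validation loss at the parameters w*(λ) obtained by minimizing the training loss,

$$
\lambda^* = \arg\min_{\lambda} \mathcal{L}_{\text{val}}\big(w^*(\lambda), \lambda\big), \qquad
w^*(\lambda) = \arg\min_{w} \mathcal{L}_{\text{train}}(w, \lambda),
$$

with hypergradient

$$
\frac{d\mathcal{L}_{\text{val}}}{d\lambda}
= \frac{\partial \mathcal{L}_{\text{val}}}{\partial \lambda}
+ \frac{\partial \mathcal{L}_{\text{val}}}{\partial w}\,\frac{\partial w^*}{\partial \lambda}.
$$

The methods differ in how they approximate the response Jacobian ∂w*/∂λ: one-step unrolling (T1-T2), a truncated Neumann series (Neumann), conjugate gradient on the implicit equation (HOAG), or a piecewise-linear reversal of the training trajectory (DrMAD).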

🛠️ Installation

Our Python library hippotrainer can be installed in several ways; choose the one that best suits your purposes.

We suggest installing from source (pip install git+https://github.com/intsystems/hippotrainer) if you want the latest package version.

Install using pip

pip install hippotrainer

Install from source

pip install git+https://github.com/intsystems/hippotrainer

Install via Git clone

git clone https://github.com/intsystems/hippotrainer
cd hippotrainer
pip install -e .

🚀 Usage

You can use our library to tune almost any (see below) hyperparameter in your own code. The HyperOptimizer interface is very similar to PyTorch's Optimizer.

It supports the key methods:

  1. step to perform an optimization step over the parameters (or hyperparameters, see below)
  2. zero_grad to zero out the parameter gradients (same as optimizer.zero_grad())

We provide demo experiments with each implemented method in this notebook. They work as follows (see the sketch after this list):

  1. Get the next batch from the train dataloader
  2. Run the forward and backward passes on the computed loss
  3. hyper_optimizer.step(loss) performs a model parameter step and, once the inner steps have accumulated, a hyperparameter step (computes the hypergradients, performs the optimization step, and zeroes the hypergradients)
  4. hyper_optimizer.zero_grad() zeroes the model parameter gradients (same as optimizer.zero_grad())
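
A minimal sketch of such a training loop is shown below. The step and zero_grad calls follow the description above; the class name T1T2 and its constructor arguments are illustrative assumptions, so check the documentation and the demo notebook for the exact signatures.

import torch
import torch.nn as nn
import torch.nn.functional as F
from hippotrainer import T1T2  # assumed import path

model = nn.Linear(10, 2)
# Continuous loss hyperparameter to tune: L2-regularization strength.
l2_coef = torch.tensor(1e-3, requires_grad=True)

# Hypothetical constructor call; the real arguments may differ.
hyper_optimizer = T1T2(model.parameters(), hyperparams=[l2_coef])

# Dummy data standing in for a real train dataloader.
train_loader = [(torch.randn(32, 10), torch.randint(0, 2, (32,))) for _ in range(10)]

for inputs, targets in train_loader:
    loss = F.cross_entropy(model(inputs), targets)
    loss = loss + l2_coef * sum(p.pow(2).sum() for p in model.parameters())
    loss.backward()
    hyper_optimizer.step(loss)   # parameter step (+ hyperparameter step when due)
    hyper_optimizer.zero_grad()  # zero the model parameter gradients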

Optimizer vs. HyperOptimizer method step

Gradient-based hyperparameter optimization interleaves hyper-optimization steps with the model parameter optimization. Thus, we combine the Optimizer method step with inner_steps, defined by each method.

For example, T1T2 does NOT use any inner steps, so optimization over parameters and hyperparameters proceeds step by step. The Neumann method, in contrast, performs several inner optimization steps over the model parameters before it takes the hyperstep.

See more details here.
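
To illustrate the interplay (purely schematically, this is not hippotrainer's actual implementation): each call to step updates the parameters, and a hyperparameter step fires only once the method's inner steps have been accumulated.

# Schematic gating of the hyperstep, with made-up counters (not library code).
inner_steps = 5              # e.g. effectively 1 for T1-T2, several for Neumann
for t in range(1, 11):
    # ... model parameter update happens on every call to step() ...
    if t % inner_steps == 0:
        # ... hypergradient computation and hyperparameter update happen here ...
        print(f"call {t}: hyperparameter step")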

Supported hyperparameters types

The HyperOptimizer logic is well-suited for almost all CONTINUOUS hyperparameter types (continuity is required for gradient-based methods):

  1. Model hyperparameters (e.g., gate coefficients)
  2. Loss hyperparameters (e.g., L1/L2-regularization)

However, learning-rate tuning is currently not supported (or is supported but has not been sufficiently tested). We plan to improve this functionality in future releases, stay tuned!
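
As an example of the first type, a gate coefficient can be defined as a scalar tensor with requires_grad=True, kept out of the regular optimizer's parameter groups and handed to the HyperOptimizer instead (an illustrative sketch; see the demo notebook for how hyperparameters are actually registered):

import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = nn.Linear(10, 10)
        self.branch_b = nn.Linear(10, 10)
        # Continuous model hyperparameter: a gate mixing the two branches.
        # Plain tensor (not nn.Parameter), so it stays out of model.parameters().
        self.gate = torch.tensor(0.5, requires_grad=True)

    def forward(self, x):
        return self.gate * self.branch_a(x) + (1 - self.gate) * self.branch_b(x)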

🤝 Contributors

  • Daniil Dorin (Basic code writing, Final demo, Algorithms)
  • Igor Ignashin (Project wrapping, Documentation writing, Algorithms)
  • Nikita Kiselev (Project planning, Blog post, Algorithms)
  • Andrey Veprikov (Tests writing, Documentation writing, Algorithms)
  • We welcome contributions!

📄 License

HippoTrainer is MIT licensed. See LICENSE for details.
