# Sequel: A Continual Learning Library in PyTorch and JAX

The goal of this library is to provide a simple, easy-to-use framework for continual learning. The library is written in PyTorch and JAX and provides a simple interface for running experiments. It is still under development, and we are working on adding more algorithms and datasets.

- Documentation: https://nik-dim.github.io/sequel-site/
- Reproducibility Board: https://nik-dim.github.io/sequel-site/reproducibility/
- Weights&Biases: https://wandb.ai/nikdim/SequeL

## Installation

The library can be installed via pip:
```bash
pip install sequel-core
```

Alternatively, you can install the library from source:
```bash
git clone https://github.com/nik-dim/sequel.git
cd sequel
python3 -m build
pip install dist/*.whl
```

You can also use the library by cloning the repository and installing its dependencies, listed in the `requirements.txt` file. We recommend using a conda environment for this. The following commands create a conda environment with the required packages and activate it:
```bash
# create the conda environment
conda create -n sequel -y python=3.10 cuda cudatoolkit cuda-nvcc -c nvidia -c anaconda
conda activate sequel

# install all required packages
pip install -r requirements.txt

# Optional: depending on the machine, the next command may be needed to enable CUDA support for GPUs
pip install "jax[cuda11_cudnn82]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```

## Run an experiment

For some examples, you can modify the `example_pytorch.py` and `example_jax.py` files, or run:
```bash
# example experiment in PyTorch
python example_pytorch.py

# ...and in JAX
python example_jax.py
```

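Continual learning experiments such as the ones above are commonly evaluated by training on tasks in sequence and recording an accuracy matrix. A toy sketch of the standard summary metrics, average accuracy and forgetting, in plain Python (illustrative numbers, not the library's API):

```python
# acc[i][j] = accuracy on task j after training on task i (toy numbers)
acc = [
    [0.95, 0.10],  # after training on task 1
    [0.80, 0.93],  # after training on task 2
]

# average accuracy over all tasks after the final task
avg_acc = sum(acc[-1]) / len(acc[-1])

# forgetting of task 1: best accuracy achieved earlier minus final accuracy
forgetting = max(row[0] for row in acc[:-1]) - acc[-1][0]

print(avg_acc)     # ~0.865
print(forgetting)  # ~0.15
```

Algorithms like EWC trade a little accuracy on the current task for lower forgetting on previous ones.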
Experiment configurations are located in the `examples/` directory inside `configs/`. To run an experiment, simply do the following:

```bash
python main.py +experiment=EXPERIMENT_DIR/EXPERIMENT

# examples
python main.py +examples=ewc_rotatedmnist mode=pytorch # or mode=jax
python main.py +examples=mcsgd_rotatedmnist mode=pytorch # or mode=jax
```
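The `ewc_rotatedmnist` example refers to Elastic Weight Consolidation, which adds a quadratic penalty that keeps parameters close to values estimated as important (via the Fisher information) for previous tasks. A minimal sketch of the penalty term in plain Python, with toy values rather than the library's implementation:

```python
def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2."""
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )

# toy values: two parameters; the Fisher weights mark the first as important
params = [1.5, 0.0]       # current parameters
old_params = [1.0, 0.0]   # parameters after the previous task
fisher = [4.0, 0.1]       # estimated importance per parameter

print(ewc_penalty(params, old_params, fisher))  # 0.5
```

This penalty is added to the task loss, so moving an important parameter away from its old value is penalized more heavily than moving an unimportant one.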

To create your own experiment, follow the template of the experiments in `configs/examples/`: override the defaults so that, for example, another algorithm is selected, and specify the training details. To run multiple experiments with different configs, use the `--multirun` flag of [Hydra](https://hydra.cc/docs). For instance:
```bash
python main.py --multirun +examples=ewc_rotatedmnist \
    mode=pytorch optimizer.lr=0.01,0.001 \
    benchmark.batch_size=128,256 \
    training.epochs_per_task=1 # online setting
```