diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
deleted file mode 100644
index 51de3a9..0000000
--- a/.pre-commit-config.yaml
+++ /dev/null
@@ -1,43 +0,0 @@
-repos:
- - repo: https://github.com/pre-commit/pre-commit-hooks
- rev: v4.4.0 # Use the ref you want to point at
- hooks:
- - id: trailing-whitespace
- - id: check-ast
- - id: check-builtin-literals
- - id: check-docstring-first
- - id: check-executables-have-shebangs
- - id: debug-statements
- - id: end-of-file-fixer
- - id: mixed-line-ending
- args: [--fix=lf]
- - id: requirements-txt-fixer
- - id: check-yaml
- - id: check-toml
-
- - repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.1.5
- hooks:
- - id: ruff
- args: [--fix]
- types_or: [python, jupyter]
-
- - repo: https://github.com/psf/black
- rev: 23.7.0
- hooks:
- - id: black
-
- - repo: https://github.com/pre-commit/mirrors-mypy
- rev: v1.6.1
- hooks:
- - id: mypy
- entry: python -m mypy --show-error-codes --pretty --config-file pyproject.toml
-
- - repo: https://github.com/nbQA-dev/nbQA
- rev: 1.7.0
- hooks:
- - id: nbqa-black
- - id: nbqa-ruff
- args: [--fix]
-
-exclude: 'icgan/.*|rcdm/.*'
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
deleted file mode 100644
index e67db36..0000000
--- a/CONTRIBUTING.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Contributing to AI Engineering Projects
-
-Thanks for your interest in contributing!
-
-To submit PRs, please fill out the PR template along with the PR. If the PR
-fixes an issue, don't forget to link the PR to the issue!
-
-## Pre-commit hooks
-
-Once the python virtual environment is setup, you can run pre-commit hooks using:
-
-```bash
-pre-commit run --all-files
-```
-
-## Coding guidelines
-
-For code style, we recommend the [google style guide](https://google.github.io/styleguide/pyguide.html).
-
-Pre-commit hooks apply the [black](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html)
-code formatting.
-
-For docstrings we use [numpy format](https://numpydoc.readthedocs.io/en/latest/format.html).
-
-We also use [flake8](https://flake8.pycqa.org/en/latest/) and [pylint](https://pylint.pycqa.org/en/stable/)
-for further static code analysis. The pre-commit hooks show errors which you need
-to fix before submitting a PR.
-
-Last but not the least, we use type hints in our code which is then checked using
-[mypy](https://mypy.readthedocs.io/en/stable/). Currently, mypy checks are not
-strict, but will be enforced more as the API code becomes more stable.
\ No newline at end of file
diff --git a/README.md b/README.md
index 7fe5fd3..f581f28 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,29 @@
# Generative SSL
-This is the PyTorch implemention of our paper **"Can Generative Models Improve Self-Supervised Representation Learning?"** submitted to ECCV 2024 for reproducing the experiments.
+This repository contains the PyTorch implementation of our paper **"Can Generative Models Improve Self-Supervised Representation Learning?"**, accepted at AAAI 2025.
+
+## Abstract
+
+Self-supervised learning (SSL) holds significant promise in leveraging unlabeled data for learning robust visual representations. However, the limited diversity and quality of existing augmentation techniques constrain SSL performance. We introduce a novel framework that incorporates generative models to produce semantically consistent and diverse augmentations conditioned on source images. This approach enriches SSL training, improving downstream task performance by up to 10% in Top-1 accuracy across various techniques.
+
+
+Our augmentation pipeline utilizes generative models, i.e., Stable Diffusion or ICGAN, conditioned on the source image representation, alongside the standard SSL augmentations. The components inside the Generative Augmentation module, i.e., the pretrained SSL encoder and the generative model, remain frozen throughout the SSL training process.
+
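+For intuition, here is a minimal sketch of this augmentation step (the function and module names and the swap probability `p` are illustrative assumptions, not the actual training code):
+
+```python
+import random
+
+import torch
+
+
+def generative_augment(image, encoder, generator, ssl_transform, p=0.5):
+    """Return one SSL training view, optionally sourced from a generative model.
+
+    `encoder` and `generator` stand in for the frozen pretrained SSL encoder
+    and the frozen generative model (ICGAN or Stable Diffusion);
+    `ssl_transform` is the standard SSL augmentation pipeline.
+    """
+    if random.random() < p:  # with probability p, swap in a generated image
+        with torch.no_grad():  # both components stay frozen
+            h = encoder(image.unsqueeze(0))  # source image representation
+            image = generator(h).squeeze(0)  # sample conditioned on h
+    return ssl_transform(image)  # standard SSL augmentations on top
+```
+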
## Requirements
-To create the virtual environment for running the experiments, you need to run:
+We use the solo-learn library for the implementation of the SSL methods. The library is available at this [LINK](https://github.com/vturrisi/solo-learn).
-`pip install -r requirements.txt`
+To create the virtual environment for running the experiments, first run:
-**Note:**
-**You always need to set the proper path to the virtual environment, the dataset and the model in each SLURM file before submitting the job. Here are the options for the datasets and models that we used in our experiments:**
+`cd solo-learn`
+
+Then install the requirements by following the solo-learn installation documentation [here](https://github.com/vturrisi/solo-learn?tab=readme-ov-file#installation).
-- **Datasets:** ImageNet, iNaturalist2018, Food101, Places365, CIFAR10/100
-- **Models:** Baseline (SimSiam model trained on ImageNet), SimSiam model trained with ICGAN augmentations, SimSiam model trained with Stable Diffusion augmentations
## Data Generation
+**Note:**
+**You always need to set the path to the virtual environment and the directory for saving the generated data in the generation scripts.**
To generate augmentations with ICGAN run:
@@ -24,24 +33,23 @@ To generate augmentations with Stable Diffusion run:
`sbatch GenerativeSSL/scripts/generation_scripts/gen_img_stablediff.slrm`
-## Training
-
-To train the SimSiam method on the ImageNet, run:
-
-`sbatch GenerativeSSL/scripts/train_scrpits/train_simsiam_singlenode.slrm`
+## Training and Evaluation
+**Note:**
+**You always need to set the path to the virtual environment in the solo-learn SLURM files. We pretrained our models on the train split of ImageNet. Here are the model and dataset choices we used for evaluation in our experiments:**
-In this file, there is a `use_synthetic_data` flag that you can use to train the model with augmentations. You just need to specify the path to synthetic data. (Either ICGAN or Stable Diffusion augmentations) By default, the `use_synthetic_data` flag has been passed in the SLURM file.
+- **Datasets:** ImageNet, iNaturalist2018, Food101, Places365, CIFAR10/100
+- **Models:** SimCLR, SimSiam, MoCo, BYOL, and Barlow Twins, each with Baseline, ICGAN, and Stablediff variants
-## Evaluation
+### Training
-For downstream tasks, there are all evaluation scripts in this `GenerativeSSL/scripts/eval_scripts` folder. In each dataset folder in `eval_scripts` there are three SLURM files. (baseline model, model trained with ICGAN aug, model trained with stablediff aug)
+Configs for training are in the `solo-learn/scripts/pretrain` folder, with a config file for each model and dataset in the respective subfolders. You need to set the **dataset path** and the **directory to save the model** in each config file before submitting the job. After choosing the desired config, train a method on ImageNet by running:
-Similarly for evaluation, you just need to submit the slurm file related to the dataset you want. Again, you need to specify the path to the virtual environment, the dataset and the related checkpoint in each SLURM file. For example, command below run the experiment of evaluating model trained with stable diffusion augmentations on Food101:
+`sbatch scripts/solo_learn/train_solo_learn.slrm`
-`sbatch GenerativeSSL/scripts/eval_scripts/food101/stablediff.slrm`
-## Pretrained Models
-We also provide the checkpoints for all the trained models here in the [LINK](https://drive.google.com/drive/folders/1xPIbf1cOPqzIzuZ185GjAprA8XmQ0Tvu)
+### Evaluation
+Configs for evaluation are in the `solo-learn/scripts/linear` folder, with a config file for each model and dataset in the respective subfolders. You need to set the **dataset path**, the **directory to save the model**, and the **path to the pretrained feature extractor** in each config file before submitting the job. After choosing the desired config, run the linear evaluation on ImageNet with:
+`sbatch scripts/solo_learn/eval_solo_learn.slrm`
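+
+The **path to the pretrained feature extractor** points to a checkpoint produced by the training step above. As a rough illustration of how such a checkpoint can be turned into a frozen backbone for linear evaluation (a hedged sketch; the `state_dict` key and `backbone.` prefix are assumptions about the checkpoint layout, not solo-learn's exact format):
+
+```python
+import torch
+from torchvision.models import resnet50
+
+# Load the SSL checkpoint and keep only the backbone weights.
+ckpt = torch.load("PATH_TO_PRETRAINED_CHECKPOINT", map_location="cpu")
+state = ckpt.get("state_dict", ckpt)
+backbone_state = {
+    k.removeprefix("backbone."): v
+    for k, v in state.items()
+    if k.startswith("backbone.")
+}
+
+model = resnet50()
+model.fc = torch.nn.Identity()  # keep the feature extractor only
+model.load_state_dict(backbone_state, strict=False)
+model.eval()  # the backbone stays frozen during linear evaluation
+```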
diff --git a/images/GenSSL_last-main.png b/images/GenSSL_last-main.png
new file mode 100644
index 0000000..67c9691
Binary files /dev/null and b/images/GenSSL_last-main.png differ
diff --git a/pyproject.toml b/pyproject.toml
deleted file mode 100644
index 1c14a08..0000000
--- a/pyproject.toml
+++ /dev/null
@@ -1,56 +0,0 @@
-[build-system]
-requires = ["setuptools", "wheel"]
-build-backend = "setuptools.build_meta"
-
-[tool.black]
-line-length = 88
-
-[tool.mypy]
-ignore_missing_imports = true
-pretty = true
-
-[tool.ruff]
-lint.select = [
- "A", # flake8-builtins
- "B", # flake8-bugbear
- "COM", # flake8-commas
- "C4", # flake8-comprehensions
- "RET", # flake8-return
- "SIM", # flake8-simplify
- "ICN", # flake8-import-conventions
- "Q", # flake8-quotes
- "RSE", # flake8-raise
- "D", # pydocstyle
- "E", # pycodestyle
- "F", # pyflakes
- "I", # isort
- "W", # pycodestyle
- "N", # pep8-naming
- "ERA", # eradicate
- "PL", # pylint
-]
-lint.ignore = [
- "E501", # line length violation
- "C901", # `function_name` is too complex
- "PLR0913", # Too many arguments
- "PLR2004", # Magic value used in comparison
-]
-line-length = 88
-
-# Ignore import violations in all `__init__.py` files.
-[tool.ruff.lint.per-file-ignores]
-"__init__.py" = ["E402", "F401", "F403", "F811"]
-
-[tool.ruff.lint.isort]
-lines-after-imports = 2
-
-[tool.ruff.lint.pycodestyle]
-max-doc-length = 88
-
-[tool.ruff.lint.pydocstyle]
-convention = "numpy"
-
-[tool.pytest.ini_options]
-pythonpath = [
- "."
-]
\ No newline at end of file
diff --git a/requirements.txt b/requirements.txt
deleted file mode 100644
index e484c81..0000000
--- a/requirements.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-black
-flake8
-isort
-mypy
-pre-commit
-pytest
-pytest-cov
-toml
-types-requests
-types-setuptools
diff --git a/scripts/eval_scripts/CIFAR10/baseline.slrm b/scripts/eval_scripts/CIFAR10/baseline.slrm
deleted file mode 100644
index 3ecb7a4..0000000
--- a/scripts/eval_scripts/CIFAR10/baseline.slrm
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="cifar"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=slurm-cifar10_baseline_160_%j.out
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_baseline_seed43_bs128_rforig_2024-03-05-12-27/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="cifar10" \
- --num_classes=10
diff --git a/scripts/eval_scripts/CIFAR10/icgan.slrm b/scripts/eval_scripts/CIFAR10/icgan.slrm
deleted file mode 100644
index f4bf503..0000000
--- a/scripts/eval_scripts/CIFAR10/icgan.slrm
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="cifar"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=slurm-cifar10_baseline_160_%j.out
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_icgan_seed43_bs128_rforig_2024-03-05-12-52/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="cifar10" \
- --num_classes=10
diff --git a/scripts/eval_scripts/CIFAR10/stablediff.slrm b/scripts/eval_scripts/CIFAR10/stablediff.slrm
deleted file mode 100644
index 64361fb..0000000
--- a/scripts/eval_scripts/CIFAR10/stablediff.slrm
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="cifar"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=slurm-cifar10_baseline_160_%j.out
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_stablediff_p0p5_seed43_2024-03-05-13-39/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="cifar10" \
- --num_classes=10
diff --git a/scripts/eval_scripts/CIFAR100/baseline.slrm b/scripts/eval_scripts/CIFAR100/baseline.slrm
deleted file mode 100644
index a68be76..0000000
--- a/scripts/eval_scripts/CIFAR100/baseline.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="cifar"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=cifar100_baseline_160_%j.out
-#SBATCH --error=cifar100_baseline_160_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_baseline_seed43_bs128_rforig_2024-03-05-12-27/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="cifar100" \
- --num_classes=100
\ No newline at end of file
diff --git a/scripts/eval_scripts/CIFAR100/icgan.slrm b/scripts/eval_scripts/CIFAR100/icgan.slrm
deleted file mode 100644
index 98a2125..0000000
--- a/scripts/eval_scripts/CIFAR100/icgan.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="cifar"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=cifar100_baseline_160_%j.out
-#SBATCH --error=cifar100_baseline_160_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_icgan_seed43_bs128_rforig_2024-03-05-12-52/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="cifar100" \
- --num_classes=100
\ No newline at end of file
diff --git a/scripts/eval_scripts/CIFAR100/stablediff.slrm b/scripts/eval_scripts/CIFAR100/stablediff.slrm
deleted file mode 100644
index 9f6d928..0000000
--- a/scripts/eval_scripts/CIFAR100/stablediff.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="cifar"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=cifar100_baseline_160_%j.out
-#SBATCH --error=cifar100_baseline_160_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_stablediff_p0p5_seed43_2024-03-05-13-39/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="cifar100" \
- --num_classes=100
\ No newline at end of file
diff --git a/scripts/eval_scripts/INaturalist/baseline.slrm b/scripts/eval_scripts/INaturalist/baseline.slrm
deleted file mode 100644
index e68bef7..0000000
--- a/scripts/eval_scripts/INaturalist/baseline.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="inaturalist"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=inaturalist_baseline_%j.out
-#SBATCH --error=inaturalist_baseline_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/datasets/inat_comp/2018/" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_baseline_seed43_bs128_rforig_2024-03-05-12-27/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="INaturalist" \
- --num_classes=8142
\ No newline at end of file
diff --git a/scripts/eval_scripts/INaturalist/icgan.slrm b/scripts/eval_scripts/INaturalist/icgan.slrm
deleted file mode 100644
index 2341e6f..0000000
--- a/scripts/eval_scripts/INaturalist/icgan.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="inaturalist"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=inaturalist_baseline_%j.out
-#SBATCH --error=inaturalist_baseline_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/datasets/inat_comp/2018/" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_icgan_seed43_bs128_rforig_2024-03-05-12-52/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="INaturalist" \
- --num_classes=8142
\ No newline at end of file
diff --git a/scripts/eval_scripts/INaturalist/stablediff.slrm b/scripts/eval_scripts/INaturalist/stablediff.slrm
deleted file mode 100644
index 29f1159..0000000
--- a/scripts/eval_scripts/INaturalist/stablediff.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="inaturalist"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=inaturalist_baseline_%j.out
-#SBATCH --error=inaturalist_baseline_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/datasets/inat_comp/2018/" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_stablediff_p0p5_seed43_2024-03-05-13-39/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="INaturalist" \
- --num_classes=8142
\ No newline at end of file
diff --git a/scripts/eval_scripts/food101/baseline.slrm b/scripts/eval_scripts/food101/baseline.slrm
deleted file mode 100644
index f9f5fdf..0000000
--- a/scripts/eval_scripts/food101/baseline.slrm
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="food101"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=slurm-food101_baseline_160_%j.out
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_baseline_seed43_bs128_rforig_2024-03-05-12-27/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="food101" \
- --num_classes=101
\ No newline at end of file
diff --git a/scripts/eval_scripts/food101/icgan.slrm b/scripts/eval_scripts/food101/icgan.slrm
deleted file mode 100644
index c31f3a5..0000000
--- a/scripts/eval_scripts/food101/icgan.slrm
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="food101"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=slurm-food101_baseline_160_%j.out
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="P/projects/imagenet_synthetic/model_checkpoints/simsiam_icgan_seed43_bs128_rforig_2024-03-05-12-52/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="food101" \
- --num_classes=101
\ No newline at end of file
diff --git a/scripts/eval_scripts/food101/stablediff.slrm b/scripts/eval_scripts/food101/stablediff.slrm
deleted file mode 100644
index a30522b..0000000
--- a/scripts/eval_scripts/food101/stablediff.slrm
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="food101"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=slurm-food101_baseline_160_%j.out
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_stablediff_p0p5_seed43_2024-03-05-13-39/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="food101" \
- --num_classes=101
\ No newline at end of file
diff --git a/scripts/eval_scripts/imagenet/baseline.slrm b/scripts/eval_scripts/imagenet/baseline.slrm
deleted file mode 100644
index 11417ec..0000000
--- a/scripts/eval_scripts/imagenet/baseline.slrm
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="imagenet_eval"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=imagenet_baseline_%j.out
-#SBATCH --error=imagenet_baseline_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/scratch/ssd004/datasets/imagenet256" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars --batch-size=2048 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_baseline_seed43_bs128_rforig_2024-03-05-12-27/checkpoint_0160.pth.tar" \
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT"
diff --git a/scripts/eval_scripts/imagenet/icgan.slrm b/scripts/eval_scripts/imagenet/icgan.slrm
deleted file mode 100644
index e68050d..0000000
--- a/scripts/eval_scripts/imagenet/icgan.slrm
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="imagenet_eval"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=imagenet_baseline_%j.out
-#SBATCH --error=imagenet_baseline_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/scratch/ssd004/datasets/imagenet256" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars --batch-size=2048 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_icgan_seed43_bs128_rforig_2024-03-05-12-52/checkpoint_0160.pth.tar" \
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT"
diff --git a/scripts/eval_scripts/imagenet/stablediff.slrm b/scripts/eval_scripts/imagenet/stablediff.slrm
deleted file mode 100644
index 37c85c5..0000000
--- a/scripts/eval_scripts/imagenet/stablediff.slrm
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="imagenet_eval"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=imagenet_baseline_%j.out
-#SBATCH --error=imagenet_baseline_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/scratch/ssd004/datasets/imagenet256" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars --batch-size=2048 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_stablediff_p0p5_seed43_2024-03-05-13-39/checkpoint_0160.pth.tar" \
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT"
diff --git a/scripts/eval_scripts/places365/baseline.slrm b/scripts/eval_scripts/places365/baseline.slrm
deleted file mode 100644
index a619037..0000000
--- a/scripts/eval_scripts/places365/baseline.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="places365"
-#SBATCH --partition=rtx6000
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=places365_baseline_160_%j.out
-#SBATCH --error=places365_baseline_160_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets/places365" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_baseline_seed43_bs128_rforig_2024-03-05-12-27/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="places365" \
- --num_classes=434
\ No newline at end of file
diff --git a/scripts/eval_scripts/places365/icgan.slrm b/scripts/eval_scripts/places365/icgan.slrm
deleted file mode 100644
index 84a9317..0000000
--- a/scripts/eval_scripts/places365/icgan.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="places365"
-#SBATCH --partition=rtx6000
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=places365_baseline_160_%j.out
-#SBATCH --error=places365_baseline_160_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets/places365" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_icgan_seed43_bs128_rforig_2024-03-05-12-52/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="places365" \
- --num_classes=434
\ No newline at end of file
diff --git a/scripts/eval_scripts/places365/stablediff.slrm b/scripts/eval_scripts/places365/stablediff.slrm
deleted file mode 100644
index 8985fae..0000000
--- a/scripts/eval_scripts/places365/stablediff.slrm
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="places365"
-#SBATCH --partition=rtx6000
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=places365_baseline_160_%j.out
-#SBATCH --error=places365_baseline_160_%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-echo $MASTER_ADDR
-echo $MASTER_PORT
-
-export PYTHONPATH="."
-nvidia-smi
-
-python simsiam/linear_eval.py \
- --data="/projects/imagenet_synthetic/fereshteh_datasets/places365" \
- --arch="resnet50" \
- --multiprocessing-distributed \
- --lars \
- --batch-size=4096 \
- --epochs=100 \
- -j=16 \
- --world-size 1 \
- --rank 0 \
- --pretrained="/projects/imagenet_synthetic/model_checkpoints/simsiam_stablediff_p0p5_seed43_2024-03-05-13-39/checkpoint_0160.pth.tar"\
- --dist-url "tcp://$MASTER_ADDR:$MASTER_PORT" \
- --dataset_name="places365" \
- --num_classes=434
\ No newline at end of file
diff --git a/scripts/generation_scripts/gen_img_icgan.slrm b/scripts/generation_scripts/gen_img_icgan.slrm
index 6741e60..2e9e29e 100644
--- a/scripts/generation_scripts/gen_img_icgan.slrm
+++ b/scripts/generation_scripts/gen_img_icgan.slrm
@@ -14,7 +14,7 @@
PY_ARGS=${@:1}
# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
+source YOUR_VENV_PATH/bin/activate
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
export CUDA_LAUNCH_BLOCKING=1
@@ -26,7 +26,7 @@ export PYTHONPATH="."
nvidia-smi
srun python data_generation/img2img_icgan.py \
---outdir /projects/imagenet_synthetic/synthetic_icgan \
+--outdir SAVE_DIR \
--num_shards=7 \
--shard_index=2 \
--image_version=1 \
diff --git a/scripts/generation_scripts/gen_img_stablediff.slrm b/scripts/generation_scripts/gen_img_stablediff.slrm
index 6113e05..87852af 100644
--- a/scripts/generation_scripts/gen_img_stablediff.slrm
+++ b/scripts/generation_scripts/gen_img_stablediff.slrm
@@ -14,7 +14,7 @@
PY_ARGS=${@:1}
# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
+source YOUR_VENV_PATH/bin/activate
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
export CUDA_LAUNCH_BLOCKING=1
@@ -26,7 +26,7 @@ export PYTHONPATH="."
nvidia-smi
srun python data_generation/img2img_stable_diff.py \
---outdir /projects/imagenet_synthetic/arashaf_stablediff_batched \
+--outdir SAVE_DIR \
--num_shards=7 \
--shard_index=2 \
--image_version=1 \
diff --git a/scripts/solo_learn/eval_solo_learn.slrm b/scripts/solo_learn/eval_solo_learn.slrm
index 51a5731..0a3666b 100644
--- a/scripts/solo_learn/eval_solo_learn.slrm
+++ b/scripts/solo_learn/eval_solo_learn.slrm
@@ -1,8 +1,7 @@
#!/bin/bash
-#SBATCH --job-name="eval_simsiam_single"
-#SBATCH --partition=a40
-#SBATCH --qos=a40_arashaf
+#SBATCH --job-name="eval_simclr_single"
+#SBATCH --qos=m
#SBATCH --nodes=1
#SBATCH --gres=gpu:a40:4
#SBATCH --ntasks-per-node=4
@@ -15,7 +14,7 @@
#SBATCH --time=12:00:00
# load virtual environment
-source /ssd003/projects/aieng/envs/genssl3/bin/activate
+source YOUR_VENV_PATH/bin/activate
export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
diff --git a/scripts/solo_learn/train_solo_learn.slrm b/scripts/solo_learn/train_solo_learn.slrm
index fbe9102..9ac8d76 100644
--- a/scripts/solo_learn/train_solo_learn.slrm
+++ b/scripts/solo_learn/train_solo_learn.slrm
@@ -1,8 +1,7 @@
#!/bin/bash
#SBATCH --job-name="simclr_single_train"
-#SBATCH --partition=a40
-#SBATCH --qos=a40_arashaf
+#SBATCH --qos=m
#SBATCH --nodes=1
#SBATCH --gres=gpu:a40:4
#SBATCH --ntasks-per-node=4
@@ -15,7 +14,7 @@
#SBATCH --time=72:00:00
# load virtual environment
-source /ssd003/projects/aieng/envs/genssl3/bin/activate
+source YOUR_VENV_PATH/bin/activate
export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
diff --git a/scripts/solo_learn/train_synth_solo_learn.slrm b/scripts/solo_learn/train_synth_solo_learn.slrm
deleted file mode 100644
index 4f11386..0000000
--- a/scripts/solo_learn/train_synth_solo_learn.slrm
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="simclr_single_train"
-#SBATCH --partition=a40
-#SBATCH --qos=a40_arashaf
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:a40:4
-#SBATCH --ntasks-per-node=4
-#SBATCH --cpus-per-task=8
-#SBATCH --mem=0
-#SBATCH --output=singlenode-%j.out
-#SBATCH --error=singlenode-%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=72:00:00
-
-# load virtual environment
-source /ssd003/projects/aieng/envs/genssl3/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-
-export PYTHONPATH="."
-nvidia-smi
-
-torchrun --nproc-per-node=4 --nnodes=1 solo-learn/main_pretrain.py \
- --config-path scripts/pretrain/imagenet/ \
- --config-name simclr_synthetic.yaml
\ No newline at end of file
diff --git a/scripts/train_scrpits/train_simsiam_multinode.slrm b/scripts/train_scrpits/train_simsiam_multinode.slrm
deleted file mode 100644
index 0d4d55c..0000000
--- a/scripts/train_scrpits/train_simsiam_multinode.slrm
+++ /dev/null
@@ -1,57 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="simsiam_multi_train"
-#SBATCH --partition=a40
-#SBATCH --account=deadline
-#SBATCH --qos=deadline
-#SBATCH --nodes=2
-#SBATCH --gres=gpu:a40:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=01:00:00
-#SBATCH --cpus-per-task=4
-#SBATCH --mem-per-cpu=8G
-#SBATCH --output=slurm-%j.out
-#SBATCH --error=slurm-%j.err
-# load virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-
-
-export MASTER_ADDR="$(hostname --fqdn)"
-export MASTER_PORT="$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')"
-export RDVZ_ID=$RANDOM
-echo "RDZV Endpoint $MASTER_ADDR:$MASTER_PORT"
-
-export PYTHONPATH="."
-nvidia-smi
-
-srun -p $SLURM_JOB_PARTITION \
- -c $SLURM_CPUS_ON_NODE \
- -N $SLURM_JOB_NUM_NODES \
- --mem=0 \
- --gres=gpu:$SLURM_JOB_PARTITION:$SLURM_GPUS_ON_NODE \
- bash -c 'torchrun \
- --nproc-per-node=$SLURM_GPUS_ON_NODE \
- --nnodes=$SLURM_JOB_NUM_NODES \
- --rdzv-endpoint $MASTER_ADDR:$MASTER_PORT \
- --rdzv-id $RDVZ_ID \
- --rdzv-backend c10d \
- simsiam/train_simsiam.py.py \
- -a resnet50 \
- --fix-pred-lr \
- --distributed_mode \
- --batch-size=128 \
- --epochs=200 \
- --experiment="simsiam_icgan_seed43_bs128_rforig" \
- --resume_from_checkpoint="/projects/imagenet_synthetic/model_checkpoints/_original_simsiam/checkpoint_0099.pth.tar" \
- --seed=43 \
- --use_synthetic_data \
- --synthetic_data_dir="/projects/imagenet_synthetic/synthetic_icgan" \
- --synthetic_index_min=0 \
- --synthetic_index_max=4 \
- --generative_augmentation_prob=0.5'
\ No newline at end of file
diff --git a/scripts/train_scrpits/train_simsiam_singlenode.slrm b/scripts/train_scrpits/train_simsiam_singlenode.slrm
deleted file mode 100644
index 4be266e..0000000
--- a/scripts/train_scrpits/train_simsiam_singlenode.slrm
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name="simsiam_single_train"
-#SBATCH --partition=a40
-#SBATCH --qos=deadline
-#SBATCH --account=deadline
-#SBATCH --nodes=1
-#SBATCH --gres=gpu:a40:4
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=32
-#SBATCH --mem=0
-#SBATCH --output=singlenode-%j.out
-#SBATCH --error=singlenode-%j.err
-#SBATCH --open-mode=append
-#SBATCH --wait-all-nodes=1
-#SBATCH --time=12:00:00
-
-# activate virtual environment
-source /ssd003/projects/aieng/envs/genssl2/bin/activate
-
-export NCCL_IB_DISABLE=1 # Our cluster does not have InfiniBand. We need to disable usage using this flag.
-export TORCH_NCCL_ASYNC_ERROR_HANDLING=1 # set to 1 for NCCL backend
-# export CUDA_LAUNCH_BLOCKING=1
-
-export PYTHONPATH="."
-nvidia-smi
-
-torchrun --nproc-per-node=4 --nnodes=1 simsiam/train_simsiam.py \
- -a resnet50 \
- --fix-pred-lr \
- --distributed_mode \
- --batch-size=128 \
- --epochs=100 \
- --experiment="simsiam_stablediff_p0p5_seed43" \
- --resume_from_checkpoint="" \
- --seed=43 \
- --use_synthetic_data \
- --synthetic_data_dir="/projects/imagenet_synthetic/arashaf_stablediff_batched" \
- --synthetic_index_min=0 \
- --synthetic_index_max=9 \
- --generative_augmentation_prob=0.5
\ No newline at end of file
diff --git a/simsiam/LARC.py b/simsiam/LARC.py
deleted file mode 100644
index fe41b13..0000000
--- a/simsiam/LARC.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import torch
-from torch import nn
-from torch.nn.parameter import Parameter
-
-
-class LARC(object):
- """
- :class:`LARC` is a pytorch implementation of both the scaling and clipping variants of LARC,
- in which the ratio between gradient and parameter magnitudes is used to calculate an adaptive
- local learning rate for each individual parameter. The algorithm is designed to improve
- convergence of large batch training.
-
- See https://arxiv.org/abs/1708.03888 for calculation of the local learning rate.
- In practice it modifies the gradients of parameters as a proxy for modifying the learning rate
- of the parameters. This design allows it to be used as a wrapper around any torch.optim Optimizer.
- ```
- model = ...
- optim = torch.optim.Adam(model.parameters(), lr=...)
- optim = LARC(optim)
- ```
- It can even be used in conjunction with apex.fp16_utils.FP16_optimizer.
- ```
- model = ...
- optim = torch.optim.Adam(model.parameters(), lr=...)
- optim = LARC(optim)
- optim = apex.fp16_utils.FP16_Optimizer(optim)
- ```
- Args:
- optimizer: Pytorch optimizer to wrap and modify learning rate for.
- trust_coefficient: Trust coefficient for calculating the lr. See https://arxiv.org/abs/1708.03888
- clip: Decides between clipping or scaling mode of LARC. If `clip=True` the learning rate is set to `min(optimizer_lr, local_lr)` for each parameter. If `clip=False` the learning rate is set to `local_lr*optimizer_lr`.
- eps: epsilon kludge to help with numerical stability while calculating adaptive_lr
- """
-
- def __init__(self, optimizer, trust_coefficient=0.02, clip=True, eps=1e-8):
- self.optim = optimizer
- self.trust_coefficient = trust_coefficient
- self.eps = eps
- self.clip = clip
-
- def __getstate__(self):
- return self.optim.__getstate__()
-
- def __setstate__(self, state):
- self.optim.__setstate__(state)
-
- @property
- def state(self):
- return self.optim.state
-
- def __repr__(self):
- return self.optim.__repr__()
-
- @property
- def param_groups(self):
- return self.optim.param_groups
-
- @param_groups.setter
- def param_groups(self, value):
- self.optim.param_groups = value
-
- def state_dict(self):
- return self.optim.state_dict()
-
- def load_state_dict(self, state_dict):
- self.optim.load_state_dict(state_dict)
-
- def zero_grad(self):
- self.optim.zero_grad()
-
- def add_param_group(self, param_group):
- self.optim.add_param_group(param_group)
-
- def step(self):
- with torch.no_grad():
- weight_decays = []
- for group in self.optim.param_groups:
- # absorb weight decay control from optimizer
- weight_decay = group["weight_decay"] if "weight_decay" in group else 0
- weight_decays.append(weight_decay)
- group["weight_decay"] = 0
- for p in group["params"]:
- if p.grad is None:
- continue
- param_norm = torch.norm(p.data)
- grad_norm = torch.norm(p.grad.data)
-
- if param_norm != 0 and grad_norm != 0:
- # calculate adaptive lr + weight decay
- adaptive_lr = (
- self.trust_coefficient
- * (param_norm)
- / (grad_norm + param_norm * weight_decay + self.eps)
- )
-
- # clip learning rate for LARC
- if self.clip:
- # calculation of adaptive_lr so that when multiplied by lr it equals `min(adaptive_lr, lr)`
- adaptive_lr = min(adaptive_lr / group["lr"], 1)
-
- p.grad.data += weight_decay * p.data
- p.grad.data *= adaptive_lr
-
- self.optim.step()
- # return weight decay control to optimizer
- for i, group in enumerate(self.optim.param_groups):
- group["weight_decay"] = weight_decays[i]
diff --git a/simsiam/LICENSE b/simsiam/LICENSE
deleted file mode 100644
index 105a4fb..0000000
--- a/simsiam/LICENSE
+++ /dev/null
@@ -1,399 +0,0 @@
-Attribution-NonCommercial 4.0 International
-
-=======================================================================
-
-Creative Commons Corporation ("Creative Commons") is not a law firm and
-does not provide legal services or legal advice. Distribution of
-Creative Commons public licenses does not create a lawyer-client or
-other relationship. Creative Commons makes its licenses and related
-information available on an "as-is" basis. Creative Commons gives no
-warranties regarding its licenses, any material licensed under their
-terms and conditions, or any related information. Creative Commons
-disclaims all liability for damages resulting from their use to the
-fullest extent possible.
-
-Using Creative Commons Public Licenses
-
-Creative Commons public licenses provide a standard set of terms and
-conditions that creators and other rights holders may use to share
-original works of authorship and other material subject to copyright
-and certain other rights specified in the public license below. The
-following considerations are for informational purposes only, are not
-exhaustive, and do not form part of our licenses.
-
- Considerations for licensors: Our public licenses are
- intended for use by those authorized to give the public
- permission to use material in ways otherwise restricted by
- copyright and certain other rights. Our licenses are
- irrevocable. Licensors should read and understand the terms
- and conditions of the license they choose before applying it.
- Licensors should also secure all rights necessary before
- applying our licenses so that the public can reuse the
- material as expected. Licensors should clearly mark any
- material not subject to the license. This includes other CC-
- licensed material, or material used under an exception or
- limitation to copyright. More considerations for licensors:
- wiki.creativecommons.org/Considerations_for_licensors
-
- Considerations for the public: By using one of our public
- licenses, a licensor grants the public permission to use the
- licensed material under specified terms and conditions. If
- the licensor's permission is not necessary for any reason--for
- example, because of any applicable exception or limitation to
- copyright--then that use is not regulated by the license. Our
- licenses grant only permissions under copyright and certain
- other rights that a licensor has authority to grant. Use of
- the licensed material may still be restricted for other
- reasons, including because others have copyright or other
- rights in the material. A licensor may make special requests,
- such as asking that all changes be marked or described.
- Although not required by our licenses, you are encouraged to
- respect those requests where reasonable. More_considerations
- for the public:
- wiki.creativecommons.org/Considerations_for_licensees
-
-=======================================================================
-
-Creative Commons Attribution-NonCommercial 4.0 International Public
-License
-
-By exercising the Licensed Rights (defined below), You accept and agree
-to be bound by the terms and conditions of this Creative Commons
-Attribution-NonCommercial 4.0 International Public License ("Public
-License"). To the extent this Public License may be interpreted as a
-contract, You are granted the Licensed Rights in consideration of Your
-acceptance of these terms and conditions, and the Licensor grants You
-such rights in consideration of benefits the Licensor receives from
-making the Licensed Material available under these terms and
-conditions.
-
-Section 1 -- Definitions.
-
- a. Adapted Material means material subject to Copyright and Similar
- Rights that is derived from or based upon the Licensed Material
- and in which the Licensed Material is translated, altered,
- arranged, transformed, or otherwise modified in a manner requiring
- permission under the Copyright and Similar Rights held by the
- Licensor. For purposes of this Public License, where the Licensed
- Material is a musical work, performance, or sound recording,
- Adapted Material is always produced where the Licensed Material is
- synched in timed relation with a moving image.
-
- b. Adapter's License means the license You apply to Your Copyright
- and Similar Rights in Your contributions to Adapted Material in
- accordance with the terms and conditions of this Public License.
-
- c. Copyright and Similar Rights means copyright and/or similar rights
- closely related to copyright including, without limitation,
- performance, broadcast, sound recording, and Sui Generis Database
- Rights, without regard to how the rights are labeled or
- categorized. For purposes of this Public License, the rights
- specified in Section 2(b)(1)-(2) are not Copyright and Similar
-     Rights.
-
-  d. Effective Technological Measures means those measures that, in the
- absence of proper authority, may not be circumvented under laws
- fulfilling obligations under Article 11 of the WIPO Copyright
- Treaty adopted on December 20, 1996, and/or similar international
- agreements.
-
- e. Exceptions and Limitations means fair use, fair dealing, and/or
- any other exception or limitation to Copyright and Similar Rights
- that applies to Your use of the Licensed Material.
-
- f. Licensed Material means the artistic or literary work, database,
- or other material to which the Licensor applied this Public
- License.
-
- g. Licensed Rights means the rights granted to You subject to the
- terms and conditions of this Public License, which are limited to
- all Copyright and Similar Rights that apply to Your use of the
- Licensed Material and that the Licensor has authority to license.
-
- h. Licensor means the individual(s) or entity(ies) granting rights
- under this Public License.
-
- i. NonCommercial means not primarily intended for or directed towards
- commercial advantage or monetary compensation. For purposes of
- this Public License, the exchange of the Licensed Material for
- other material subject to Copyright and Similar Rights by digital
- file-sharing or similar means is NonCommercial provided there is
- no payment of monetary compensation in connection with the
- exchange.
-
- j. Share means to provide material to the public by any means or
- process that requires permission under the Licensed Rights, such
- as reproduction, public display, public performance, distribution,
- dissemination, communication, or importation, and to make material
- available to the public including in ways that members of the
- public may access the material from a place and at a time
- individually chosen by them.
-
- k. Sui Generis Database Rights means rights other than copyright
- resulting from Directive 96/9/EC of the European Parliament and of
- the Council of 11 March 1996 on the legal protection of databases,
- as amended and/or succeeded, as well as other essentially
- equivalent rights anywhere in the world.
-
- l. You means the individual or entity exercising the Licensed Rights
- under this Public License. Your has a corresponding meaning.
-
-Section 2 -- Scope.
-
- a. License grant.
-
- 1. Subject to the terms and conditions of this Public License,
- the Licensor hereby grants You a worldwide, royalty-free,
- non-sublicensable, non-exclusive, irrevocable license to
- exercise the Licensed Rights in the Licensed Material to:
-
- a. reproduce and Share the Licensed Material, in whole or
- in part, for NonCommercial purposes only; and
-
- b. produce, reproduce, and Share Adapted Material for
- NonCommercial purposes only.
-
- 2. Exceptions and Limitations. For the avoidance of doubt, where
- Exceptions and Limitations apply to Your use, this Public
- License does not apply, and You do not need to comply with
- its terms and conditions.
-
- 3. Term. The term of this Public License is specified in Section
- 6(a).
-
- 4. Media and formats; technical modifications allowed. The
- Licensor authorizes You to exercise the Licensed Rights in
- all media and formats whether now known or hereafter created,
- and to make technical modifications necessary to do so. The
- Licensor waives and/or agrees not to assert any right or
- authority to forbid You from making technical modifications
- necessary to exercise the Licensed Rights, including
- technical modifications necessary to circumvent Effective
- Technological Measures. For purposes of this Public License,
- simply making modifications authorized by this Section 2(a)
- (4) never produces Adapted Material.
-
- 5. Downstream recipients.
-
- a. Offer from the Licensor -- Licensed Material. Every
- recipient of the Licensed Material automatically
- receives an offer from the Licensor to exercise the
- Licensed Rights under the terms and conditions of this
- Public License.
-
- b. No downstream restrictions. You may not offer or impose
- any additional or different terms or conditions on, or
- apply any Effective Technological Measures to, the
- Licensed Material if doing so restricts exercise of the
- Licensed Rights by any recipient of the Licensed
- Material.
-
- 6. No endorsement. Nothing in this Public License constitutes or
- may be construed as permission to assert or imply that You
- are, or that Your use of the Licensed Material is, connected
- with, or sponsored, endorsed, or granted official status by,
- the Licensor or others designated to receive attribution as
- provided in Section 3(a)(1)(A)(i).
-
- b. Other rights.
-
- 1. Moral rights, such as the right of integrity, are not
- licensed under this Public License, nor are publicity,
- privacy, and/or other similar personality rights; however, to
- the extent possible, the Licensor waives and/or agrees not to
- assert any such rights held by the Licensor to the limited
- extent necessary to allow You to exercise the Licensed
- Rights, but not otherwise.
-
- 2. Patent and trademark rights are not licensed under this
- Public License.
-
- 3. To the extent possible, the Licensor waives any right to
- collect royalties from You for the exercise of the Licensed
- Rights, whether directly or through a collecting society
- under any voluntary or waivable statutory or compulsory
- licensing scheme. In all other cases the Licensor expressly
- reserves any right to collect such royalties, including when
- the Licensed Material is used other than for NonCommercial
- purposes.
-
-Section 3 -- License Conditions.
-
-Your exercise of the Licensed Rights is expressly made subject to the
-following conditions.
-
- a. Attribution.
-
- 1. If You Share the Licensed Material (including in modified
- form), You must:
-
- a. retain the following if it is supplied by the Licensor
- with the Licensed Material:
-
- i. identification of the creator(s) of the Licensed
- Material and any others designated to receive
- attribution, in any reasonable manner requested by
- the Licensor (including by pseudonym if
- designated);
-
- ii. a copyright notice;
-
- iii. a notice that refers to this Public License;
-
- iv. a notice that refers to the disclaimer of
- warranties;
-
- v. a URI or hyperlink to the Licensed Material to the
- extent reasonably practicable;
-
- b. indicate if You modified the Licensed Material and
- retain an indication of any previous modifications; and
-
- c. indicate the Licensed Material is licensed under this
- Public License, and include the text of, or the URI or
- hyperlink to, this Public License.
-
- 2. You may satisfy the conditions in Section 3(a)(1) in any
- reasonable manner based on the medium, means, and context in
- which You Share the Licensed Material. For example, it may be
- reasonable to satisfy the conditions by providing a URI or
- hyperlink to a resource that includes the required
- information.
-
- 3. If requested by the Licensor, You must remove any of the
- information required by Section 3(a)(1)(A) to the extent
- reasonably practicable.
-
- 4. If You Share Adapted Material You produce, the Adapter's
- License You apply must not prevent recipients of the Adapted
- Material from complying with this Public License.
-
-Section 4 -- Sui Generis Database Rights.
-
-Where the Licensed Rights include Sui Generis Database Rights that
-apply to Your use of the Licensed Material:
-
- a. for the avoidance of doubt, Section 2(a)(1) grants You the right
- to extract, reuse, reproduce, and Share all or a substantial
- portion of the contents of the database for NonCommercial purposes
- only;
-
- b. if You include all or a substantial portion of the database
- contents in a database in which You have Sui Generis Database
- Rights, then the database in which You have Sui Generis Database
- Rights (but not its individual contents) is Adapted Material; and
-
- c. You must comply with the conditions in Section 3(a) if You Share
- all or a substantial portion of the contents of the database.
-
-For the avoidance of doubt, this Section 4 supplements and does not
-replace Your obligations under this Public License where the Licensed
-Rights include other Copyright and Similar Rights.
-
-Section 5 -- Disclaimer of Warranties and Limitation of Liability.
-
- a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
- EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
- AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
- ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
- IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
- WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
- PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
- ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
- KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
- ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
-
- b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
- TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
- NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
- INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
- COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
- USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
- ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
- DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
- IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
-
- c. The disclaimer of warranties and limitation of liability provided
- above shall be interpreted in a manner that, to the extent
- possible, most closely approximates an absolute disclaimer and
- waiver of all liability.
-
-Section 6 -- Term and Termination.
-
- a. This Public License applies for the term of the Copyright and
- Similar Rights licensed here. However, if You fail to comply with
- this Public License, then Your rights under this Public License
- terminate automatically.
-
- b. Where Your right to use the Licensed Material has terminated under
- Section 6(a), it reinstates:
-
- 1. automatically as of the date the violation is cured, provided
- it is cured within 30 days of Your discovery of the
- violation; or
-
- 2. upon express reinstatement by the Licensor.
-
- For the avoidance of doubt, this Section 6(b) does not affect any
- right the Licensor may have to seek remedies for Your violations
- of this Public License.
-
- c. For the avoidance of doubt, the Licensor may also offer the
- Licensed Material under separate terms or conditions or stop
- distributing the Licensed Material at any time; however, doing so
- will not terminate this Public License.
-
- d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
- License.
-
-Section 7 -- Other Terms and Conditions.
-
- a. The Licensor shall not be bound by any additional or different
- terms or conditions communicated by You unless expressly agreed.
-
- b. Any arrangements, understandings, or agreements regarding the
- Licensed Material not stated herein are separate from and
- independent of the terms and conditions of this Public License.
-
-Section 8 -- Interpretation.
-
- a. For the avoidance of doubt, this Public License does not, and
- shall not be interpreted to, reduce, limit, restrict, or impose
- conditions on any use of the Licensed Material that could lawfully
- be made without permission under this Public License.
-
- b. To the extent possible, if any provision of this Public License is
- deemed unenforceable, it shall be automatically reformed to the
- minimum extent necessary to make it enforceable. If the provision
- cannot be reformed, it shall be severed from this Public License
- without affecting the enforceability of the remaining terms and
- conditions.
-
- c. No term or condition of this Public License will be waived and no
- failure to comply consented to unless expressly agreed to by the
- Licensor.
-
- d. Nothing in this Public License constitutes or may be interpreted
- as a limitation upon, or waiver of, any privileges and immunities
- that apply to the Licensor or You, including from the legal
- processes of any jurisdiction or authority.
-
-=======================================================================
-
-Creative Commons is not a party to its public
-licenses. Notwithstanding, Creative Commons may elect to apply one of
-its public licenses to material it publishes and in those instances
-will be considered the “Licensor.” The text of the Creative Commons
-public licenses is dedicated to the public domain under the CC0 Public
-Domain Dedication. Except for the limited purpose of indicating that
-material is shared under a Creative Commons public license or as
-otherwise permitted by the Creative Commons policies published at
-creativecommons.org/policies, Creative Commons does not authorize the
-use of the trademark "Creative Commons" or any other trademark or logo
-of Creative Commons without its prior written consent including,
-without limitation, in connection with any unauthorized modifications
-to any of its public licenses or any other arrangements,
-understandings, or agreements concerning use of licensed material. For
-the avoidance of doubt, this paragraph does not form part of the
-public licenses.
-
-Creative Commons may be contacted at creativecommons.org.
\ No newline at end of file
diff --git a/simsiam/README.md b/simsiam/README.md
deleted file mode 100644
index 47bab1b..0000000
--- a/simsiam/README.md
+++ /dev/null
@@ -1,96 +0,0 @@
-# SimSiam: Exploring Simple Siamese Representation Learning
-
-This is a PyTorch implementation of the [SimSiam paper](https://arxiv.org/abs/2011.10566):
-```
-@Article{chen2020simsiam,
- author = {Xinlei Chen and Kaiming He},
- title = {Exploring Simple Siamese Representation Learning},
- journal = {arXiv preprint arXiv:2011.10566},
- year = {2020},
-}
-```
-
-### Preparation
-
-Install PyTorch and download the ImageNet dataset following the [official PyTorch ImageNet training code](https://github.com/pytorch/examples/tree/master/imagenet). Similar to [MoCo](https://github.com/facebookresearch/moco), this code release contains only minimal modifications to that code for both unsupervised pre-training and linear classification.
-
-In addition, install [apex](https://github.com/NVIDIA/apex) for the [LARS](https://github.com/NVIDIA/apex/blob/master/apex/parallel/LARC.py) implementation needed for linear classification.
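-
-As a rough sketch (assuming apex's `LARC` wrapper is importable as below; `model` is a hypothetical stand-in for the linear classifier on frozen features), LARS is applied by wrapping a standard SGD optimizer:
-```
-import torch
-from apex.parallel.LARC import LARC
-
-model = torch.nn.Linear(2048, 1000)  # stand-in linear classifier head
-base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0)
-optimizer = LARC(optimizer=base_optimizer, trust_coefficient=0.001, clip=False)
-```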
-
-### Unsupervised Pre-Training
-
-Only **multi-gpu**, **DistributedDataParallel** training is supported; single-gpu or DataParallel training is not supported.
-
-To do unsupervised pre-training of a ResNet-50 model on ImageNet in an 8-gpu machine, run:
-```
-python main_simsiam.py \
- -a resnet50 \
- --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
- --fix-pred-lr \
- [your imagenet-folder with train and val folders]
-```
-The script uses all the default hyper-parameters as described in the paper, and uses the default augmentation recipe from [MoCo v2](https://arxiv.org/abs/2003.04297).
-
-The above command performs pre-training with a non-decaying predictor learning rate for 100 epochs, corresponding to the last row of Table 1 in the paper.
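-
-The non-decaying predictor learning rate works by tagging the predictor's parameter group and skipping it in the cosine schedule. A minimal sketch, assuming the `fix_lr` parameter-group convention used in this repository:
-```
-import math
-
-def adjust_learning_rate(optimizer, init_lr, epoch, total_epochs):
-    """Cosine-decay the learning rate, keeping groups tagged fix_lr constant."""
-    cur_lr = init_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))
-    for param_group in optimizer.param_groups:
-        if param_group.get("fix_lr", False):
-            param_group["lr"] = init_lr  # predictor: non-decaying
-        else:
-            param_group["lr"] = cur_lr
-```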
-
-### Linear Classification
-
-With a pre-trained model, to train a supervised linear classifier on frozen features/weights in an 8-gpu machine, run:
-```
-python main_lincls.py \
- -a resnet50 \
- --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
- --pretrained [your checkpoint path]/checkpoint_0099.pth.tar \
- --lars \
- [your imagenet-folder with train and val folders]
-```
-
-The above command uses the LARS optimizer and a default batch size of 4096.
-
-### Models and Logs
-
-Our pre-trained ResNet-50 models and logs:
-
-| pre-train epochs | batch size | pre-train ckpt | pre-train log | linear cls. ckpt | linear cls. log | top-1 acc. |
-| --- | --- | --- | --- | --- | --- | --- |
-| 100 | 512 | link | link | link | link | 68.1 |
-| 100 | 256 | link | link | link | link | 68.3 |
-
-Settings for the above: 8 NVIDIA V100 GPUs, CUDA 10.1/CuDNN 7.6.5, PyTorch 1.7.0.
-
-### Transferring to Object Detection
-
-Object detection transfer follows the same procedure as [MoCo](https://github.com/facebookresearch/moco); please see [moco/detection](https://github.com/facebookresearch/moco/tree/master/detection).
-
-
-### License
-
-This project is under the CC-BY-NC 4.0 license. See [LICENSE](LICENSE) for details.
\ No newline at end of file
diff --git a/simsiam/__init__.py b/simsiam/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/simsiam/builder.py b/simsiam/builder.py
deleted file mode 100644
index 423af89..0000000
--- a/simsiam/builder.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-
-
-class SimSiam(nn.Module):
- """
- Build a SimSiam model.
- """
-
- def __init__(self, base_encoder, dim=2048, pred_dim=512):
- """
- dim: feature dimension (default: 2048)
- pred_dim: hidden dimension of the predictor (default: 512)
- """
- super(SimSiam, self).__init__()
-
- # create the encoder
- # num_classes is the output fc dimension, zero-initialize last BNs
- self.encoder = base_encoder(num_classes=dim, zero_init_residual=True)
-
- # build a 3-layer projector
- prev_dim = self.encoder.fc.weight.shape[1]
- self.encoder.fc = nn.Sequential(
- nn.Linear(prev_dim, prev_dim, bias=False),
- nn.BatchNorm1d(prev_dim),
- nn.ReLU(inplace=True), # first layer
- nn.Linear(prev_dim, prev_dim, bias=False),
- nn.BatchNorm1d(prev_dim),
- nn.ReLU(inplace=True), # second layer
- self.encoder.fc,
- nn.BatchNorm1d(dim, affine=False),
- ) # output layer
-        self.encoder.fc[6].bias.requires_grad = False  # hack: do not use bias as it is followed by BN
-
- # build a 2-layer predictor
- self.predictor = nn.Sequential(
- nn.Linear(dim, pred_dim, bias=False),
- nn.BatchNorm1d(pred_dim),
- nn.ReLU(inplace=True), # hidden layer
- nn.Linear(pred_dim, dim),
- ) # output layer
-
- def forward(self, x1, x2):
- """
- Input:
- x1: first views of images
- x2: second views of images
- Output:
- p1, p2, z1, z2: predictors and targets of the network
- See Sec. 3 of https://arxiv.org/abs/2011.10566 for detailed notations
- """
-
- # compute features for one view
- z1 = self.encoder(x1) # NxC
- z2 = self.encoder(x2) # NxC
-
- p1 = self.predictor(z1) # NxC
- p2 = self.predictor(z2) # NxC
-
- return p1, p2, z1.detach(), z2.detach()
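-
-
-if __name__ == "__main__":
-    # Illustrative sketch (an exposition aid, not part of the original file):
-    # the symmetric SimSiam loss computed from the forward outputs, using the
-    # same nn.CosineSimilarity criterion as train_simsiam.py.
-    import torchvision.models as models
-
-    model = SimSiam(models.resnet50)
-    criterion = nn.CosineSimilarity(dim=1)
-    x1, x2 = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
-    p1, p2, z1, z2 = model(x1, x2)
-    loss = -(criterion(p1, z2).mean() + criterion(p2, z1).mean()) * 0.5
-    print(loss.item())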
diff --git a/simsiam/distributed.py b/simsiam/distributed.py
deleted file mode 100644
index 53a555d..0000000
--- a/simsiam/distributed.py
+++ /dev/null
@@ -1,135 +0,0 @@
-"""Utilities for distributed training."""
-import os
-import subprocess
-
-import torch
-import torch.distributed as dist
-
-
-def init_distributed_mode(
- launcher,
- backend,
-) -> None:
- """Launch distributed training based on given launcher and backend.
-
- Parameters
- ----------
- launcher : {'pytorch', 'slurm'}
-        Specifies whether the PyTorch launch utility (`torchrun`) is
-        being used or if the code is running on a SLURM cluster.
- backend : {'nccl', 'gloo', 'mpi'}
- Specifies which backend to use when initializing a process group.
- """
- if launcher == "pytorch":
- launch_pytorch_dist(backend)
- elif launcher == "slurm":
- launch_slurm_dist(backend)
- else:
- raise RuntimeError(
- f"Invalid launcher type: {launcher}. Use 'pytorch' or 'slurm'.",
- )
-
-
-def launch_pytorch_dist(backend) -> None:
- """Initialize a distributed process group with PyTorch.
-
- NOTE: This method relies on `torchrun` to set 'MASTER_ADDR',
-    'MASTER_PORT', 'RANK', 'WORLD_SIZE' and 'LOCAL_RANK' as environment variables.
-
- Parameters
- ----------
- backend : {'nccl', 'gloo', 'mpi'}
- Specifies which backend to use when initializing a process group. Can be
- one of ``"nccl"``, ``"gloo"``, or ``"mpi"``.
- """
- local_rank = int(os.environ["LOCAL_RANK"])
- torch.cuda.set_device(local_rank)
- dist.init_process_group(backend=backend, init_method="env://")
- disable_non_master_print() # only print in master process
- dist.barrier()
-
-
-def launch_slurm_dist(backend) -> None:
- """Initialize a distributed process group when using SLURM.
-
- Parameters
- ----------
- backend : {'nccl', 'gloo', 'mpi'}
- Specifies which backend to use when initializing a process group. Can be
- one of ``"nccl"``, ``"gloo"``, or ``"mpi"``.
- """
- # set the MASTER_ADDR, MASTER_PORT, RANK and WORLD_SIZE
- # as environment variables before initializing the process group
- if "MASTER_ADDR" not in os.environ:
- node_list = os.environ["SLURM_NODELIST"]
- os.environ["MASTER_ADDR"] = subprocess.getoutput(
- f"scontrol show hostname {node_list} | head -n1",
- )
- if "MASTER_PORT" not in os.environ:
- os.environ["MASTER_PORT"] = "29400"
- os.environ["RANK"] = os.environ["SLURM_PROCID"]
- os.environ["WORLD_SIZE"] = os.environ["SLURM_NTASKS"]
-
- local_rank = int(os.environ["SLURM_LOCALID"])
- print(f"Initializing distributed training in process {local_rank}")
- torch.cuda.set_device(local_rank)
- dist.init_process_group(backend=backend, init_method="env://")
- disable_non_master_print() # only print on master process
- dist.barrier()
-
-
-# the following functions were adapted from:
-# https://github.com/pytorch/vision/blob/main/references/classification/utils.py
-def disable_non_master_print():
- """Disable printing if not master process.
-
- Notes
- -----
- Printing can be forced by adding a boolean flag, 'force', to the keyword arguments
- to the print function call.
- """
- import builtins as __builtin__
-
- builtin_print = __builtin__.print
-
- def print(*args, **kwargs): # noqa: A001
- force = kwargs.pop("force", False)
- if is_main_process() or force:
- builtin_print(*args, **kwargs)
-
- __builtin__.print = print
-
-
-def is_dist_avail_and_initialized() -> bool:
- """Check if the distributed package is available and initialized."""
- return dist.is_available() and dist.is_initialized()
-
-
-def get_world_size() -> int:
- """Get the total number of processes a distributed process group.
-
- It returns 1 if the PyTorch distributed package is unavailable or the
- default process group has not been initialized.
- """
- if not is_dist_avail_and_initialized():
- return 1
- return dist.get_world_size()
-
-
-def get_rank() -> int:
- """Return the global rank of the current process.
-
-    Returns 0 if the PyTorch distributed package is unavailable or the
- default process group has not been initialized.
- """
- if not is_dist_avail_and_initialized():
- return 0
- return dist.get_rank()
-
-
-def is_main_process() -> bool:
- """Check if the current process is the Master proces.
-
- The master process typically has a rank of 0.
- """
- return not is_dist_avail_and_initialized() or get_rank() == 0
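-
-
-if __name__ == "__main__":
-    # Illustrative sketch (an exposition aid, not part of the original file):
-    # initialize the process group and query the helpers. Launch with, e.g.,
-    # `torchrun --nproc_per_node=8 simsiam/distributed.py`.
-    init_distributed_mode(launcher="pytorch", backend="nccl")
-    print(f"rank {get_rank()} of {get_world_size()} processes", force=True)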
diff --git a/simsiam/inatural_dataset.py b/simsiam/inatural_dataset.py
deleted file mode 100644
index 65b689f..0000000
--- a/simsiam/inatural_dataset.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import torch.utils.data as data
-from PIL import Image
-import os
-import json
-import numpy as np
-
-
-def default_loader(path):
- return Image.open(path).convert("RGB")
-
-
-def load_taxonomy(ann_data, tax_levels, classes):
- # loads the taxonomy data and converts to ints
- taxonomy = {}
-
- if "categories" in ann_data.keys():
- num_classes = len(ann_data["categories"])
- for tt in tax_levels:
- tax_data = [aa[tt] for aa in ann_data["categories"]]
- _, tax_id = np.unique(tax_data, return_inverse=True)
- taxonomy[tt] = dict(zip(range(num_classes), list(tax_id)))
- else:
- # set up dummy data
- for tt in tax_levels:
- taxonomy[tt] = dict(zip([0], [0]))
-
- # create a dictionary of lists containing taxonomic labels
- classes_taxonomic = {}
- for cc in np.unique(classes):
- tax_ids = [0] * len(tax_levels)
- for ii, tt in enumerate(tax_levels):
- tax_ids[ii] = taxonomy[tt][cc]
- classes_taxonomic[cc] = tax_ids
-
- return taxonomy, classes_taxonomic
-
-
-class INAT(data.Dataset):
- def __init__(self, root, ann_file, transform):
- # load annotations
- print("Loading annotations from: " + os.path.basename(ann_file))
- with open(ann_file) as data_file:
- ann_data = json.load(data_file)
-
- # set up the filenames and annotations
- self.imgs = [aa["file_name"] for aa in ann_data["images"]]
- self.ids = [aa["id"] for aa in ann_data["images"]]
-
-        # if we don't have class labels, set them to 0
- if "annotations" in ann_data.keys():
- self.classes = [aa["category_id"] for aa in ann_data["annotations"]]
- else:
- self.classes = [0] * len(self.imgs)
-
- # print out some stats
- print("\t" + str(len(self.imgs)) + " images")
- print("\t" + str(len(set(self.classes))) + " classes")
-
- self.root = root
- self.loader = default_loader
-
- # augmentation params
- self.transform = transform
-
- def __getitem__(self, index):
-        path = os.path.join(self.root, self.imgs[index])
- img = self.loader(path)
- species_id = self.classes[index]
-
- img = self.transform(img)
-
- return img, species_id
-
- def __len__(self):
- return len(self.imgs)
diff --git a/simsiam/linear_eval.py b/simsiam/linear_eval.py
deleted file mode 100644
index e42c097..0000000
--- a/simsiam/linear_eval.py
+++ /dev/null
@@ -1,807 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import builtins
-import math
-import os
-import random
-import shutil
-import time
-import warnings
-from datetime import datetime
-
-import torch
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-import torch.multiprocessing as mp
-import torch.nn as nn
-import torch.nn.parallel
-import torch.optim
-import torch.utils.data
-import torch.utils.data.distributed
-import torchvision.datasets as datasets
-import torchvision.models as models
-import torchvision.transforms as transforms
-from tqdm import tqdm
-
-from inatural_dataset import INAT
-
-
-model_names = sorted(
- name
- for name in models.__dict__
- if name.islower() and not name.startswith("__") and callable(models.__dict__[name])
-)
-
-parser = argparse.ArgumentParser(description="PyTorch ImageNet Training")
-parser.add_argument(
- "--data",
- metavar="DIR",
- default="/scratch/ssd004/datasets/imagenet256",
- help="path to dataset.",
-)
-parser.add_argument(
- "-a",
- "--arch",
- metavar="ARCH",
- default="resnet50",
- choices=model_names,
- help="model architecture: " + " | ".join(model_names) + " (default: resnet50)",
-)
-parser.add_argument(
- "-j",
- "--workers",
- default=4,
- type=int,
- metavar="N",
- help="number of data loading workers (default: 32)",
-)
-parser.add_argument(
- "--epochs", default=90, type=int, metavar="N", help="number of total epochs to run"
-)
-parser.add_argument(
- "-b",
- "--batch-size",
- default=4096,
- type=int,
- metavar="N",
- help="mini-batch size (default: 4096), this is the total "
- "batch size of all GPUs on the current node when "
- "using Data Parallel or Distributed Data Parallel",
-)
-parser.add_argument(
- "--lr",
- "--learning-rate",
- default=0.1,
- type=float,
- metavar="LR",
- help="initial (base) learning rate",
- dest="lr",
-)
-parser.add_argument("--momentum", default=0.9, type=float, metavar="M", help="momentum")
-parser.add_argument(
- "--wd",
- "--weight-decay",
- default=0.0,
- type=float,
- metavar="W",
- help="weight decay (default: 0.)",
- dest="weight_decay",
-)
-parser.add_argument(
- "-p",
- "--print-freq",
- default=10,
- type=int,
- metavar="N",
- help="print frequency (default: 10)",
-)
-parser.add_argument(
- "--resume",
- default="",
- type=str,
- metavar="PATH",
- help="path to latest checkpoint (default: none)",
-)
-parser.add_argument(
- "-e",
- "--evaluate",
- dest="evaluate",
- action="store_true",
- help="evaluate model on validation set",
-)
-parser.add_argument(
- "--world-size",
- default=-1,
- type=int,
- help="number of nodes for distributed training",
-)
-parser.add_argument(
- "--rank", default=-1, type=int, help="node rank for distributed training"
-)
-parser.add_argument(
- "--dist-url",
- default="tcp://224.66.41.62:23456",
- type=str,
- help="url used to set up distributed training",
-)
-parser.add_argument(
- "--dist-backend", default="nccl", type=str, help="distributed backend"
-)
-parser.add_argument(
- "--seed", default=None, type=int, help="seed for initializing training. "
-)
-parser.add_argument("--gpu", default=None, type=int, help="GPU id to use.")
-parser.add_argument(
- "--multiprocessing-distributed",
- action="store_true",
- help="Use multi-processing distributed training to launch "
- "N processes per node, which has N GPUs. This is the "
- "fastest way to use PyTorch for either single node or "
- "multi node data parallel training",
-)
-
-# additional configs:
-parser.add_argument(
- "--pretrained", default="", type=str, help="path to simsiam pretrained checkpoint"
-)
-parser.add_argument("--lars", action="store_true", help="Use LARS")
-
-parser.add_argument("--dataset_name", default="imagenet", help="Name of the dataset.")
-
-parser.add_argument(
- "--checkpoint_dir",
- default="/projects/imagenet_synthetic/model_checkpoints",
- help="Checkpoint root directory.",
-)
-
-parser.add_argument(
- "--num_classes",
- default=1000,
- type=int,
- help="Number of classes in the dataset.",
-)
-
-parser.add_argument(
- "--ablation_mode",
- default="icgan",
- type=str,
- help="Using icgan or stable diffusion feature extractor for ablation study.",
-)
-
-best_acc1 = 0
-
-
-def main():
- args = parser.parse_args()
- current_time = datetime.now().strftime("%Y-%m-%d-%H-%M")
- args.checkpoint_dir = os.path.join(args.checkpoint_dir, f"eval_{current_time}")
- os.makedirs(args.checkpoint_dir, exist_ok=True)
-
- print(args)
-
- if args.seed is not None:
- random.seed(args.seed)
- torch.manual_seed(args.seed)
- # NOTE: this line can reduce speed considerably
- cudnn.deterministic = True
- warnings.warn(
- "You have chosen to seed training. "
- "This will turn on the CUDNN deterministic setting, "
- "which can slow down your training considerably! "
- "You may see unexpected behavior when restarting "
- "from checkpoints."
- )
-
- if args.gpu is not None:
- warnings.warn(
- "You have chosen a specific GPU. This will completely "
- "disable data parallelism."
- )
-
- if args.dist_url == "env://" and args.world_size == -1:
- args.world_size = int(os.environ["WORLD_SIZE"])
- args.distributed = args.world_size > 1 or args.multiprocessing_distributed
-
- ngpus_per_node = torch.cuda.device_count()
- if args.multiprocessing_distributed:
- # Since we have ngpus_per_node processes per node, the total world_size
- # needs to be adjusted accordingly
- args.world_size = ngpus_per_node * args.world_size
- # Use torch.multiprocessing.spawn to launch distributed processes: the
- # main_worker process function
- mp.spawn(
- main_worker,
- nprocs=ngpus_per_node,
- args=(
- ngpus_per_node,
- args,
- ),
- )
- else:
- # Simply call main_worker function
- main_worker(args.gpu, ngpus_per_node, args)
-
-
-def main_worker(gpu, ngpus_per_node, args):
- global best_acc1
- args.gpu = gpu
-
- # suppress printing if not master
- if args.multiprocessing_distributed and args.gpu != 0:
-
- def print_pass(*args, flush=True):
- pass
-
- builtins.print = print_pass
-
- if args.gpu is not None:
- print("Use GPU: {} for training".format(args.gpu), flush=True)
-
- if args.distributed:
- if args.dist_url == "env://" and args.rank == -1:
- args.rank = int(os.environ["RANK"])
- if args.multiprocessing_distributed:
- # For multiprocessing distributed training, rank needs to be the
- # global rank among all the processes
- args.rank = args.rank * ngpus_per_node + gpu
- dist.init_process_group(
- backend=args.dist_backend,
- init_method=args.dist_url,
- world_size=args.world_size,
- rank=args.rank,
- )
- torch.distributed.barrier()
-
- # create model
- print("=> creating model '{}'".format(args.arch), flush=True)
- model = models.__dict__[args.arch]()
-
- model.fc = nn.Linear(2048, args.num_classes)
-
- # freeze all layers but the last fc
- for name, param in model.named_parameters():
- if name not in ["fc.weight", "fc.bias"]:
- param.requires_grad = False
- # init the fc layer
- model.fc.weight.data.normal_(mean=0.0, std=0.01)
- model.fc.bias.data.zero_()
-
- # load from pre-trained, before DistributedDataParallel constructor
- if args.pretrained:
- if os.path.isfile(args.pretrained):
- print("=> loading checkpoint '{}'".format(args.pretrained), flush=True)
- checkpoint = torch.load(args.pretrained, map_location="cpu")
-
- # rename moco pre-trained keys
- state_dict = checkpoint["state_dict"]
- for k in list(state_dict.keys()):
- # retain only encoder up to before the embedding layer
- if k.startswith("module.encoder") and not k.startswith(
- "module.encoder.fc"
- ):
- # remove prefix
- state_dict[k[len("module.encoder.") :]] = state_dict[k]
- # delete renamed or unused k
- del state_dict[k]
-
- args.start_epoch = 0
- msg = model.load_state_dict(state_dict, strict=False)
- assert set(msg.missing_keys) == {"fc.weight", "fc.bias"}
-
- print("=> loaded pre-trained model '{}'".format(args.pretrained))
- else:
- print("=> no checkpoint found at '{}'".format(args.pretrained))
-
- # infer learning rate before changing batch size
- init_lr = args.lr * args.batch_size / 256
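-    # (linear scaling rule: with the default lr=0.1 and batch_size=4096,
-    # this gives init_lr = 0.1 * 4096 / 256 = 1.6)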
-
- if args.distributed:
- # For multiprocessing distributed, DistributedDataParallel constructor
- # should always set the single device scope, otherwise,
- # DistributedDataParallel will use all available devices.
- if args.gpu is not None:
- torch.cuda.set_device(args.gpu)
- model.cuda(args.gpu)
- # When using a single GPU per process and per
- # DistributedDataParallel, we need to divide the batch size
- # ourselves based on the total number of GPUs we have
- args.batch_size = int(args.batch_size / ngpus_per_node)
- args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
- model = torch.nn.parallel.DistributedDataParallel(
- model, device_ids=[args.gpu]
- )
- else:
- model.cuda()
- # DistributedDataParallel will divide and allocate batch_size to all
- # available GPUs if device_ids are not set
- model = torch.nn.parallel.DistributedDataParallel(model)
- elif args.gpu is not None:
- torch.cuda.set_device(args.gpu)
- model = model.cuda(args.gpu)
- else:
- # DataParallel will divide and allocate batch_size to all available GPUs
- if args.arch.startswith("alexnet") or args.arch.startswith("vgg"):
- model.features = torch.nn.DataParallel(model.features)
- model.cuda()
- else:
- model = torch.nn.DataParallel(model).cuda()
-
- # define loss function (criterion) and optimizer
- criterion = nn.CrossEntropyLoss().cuda(args.gpu)
-
- # optimize only the linear classifier
- parameters = list(filter(lambda p: p.requires_grad, model.parameters()))
- assert len(parameters) == 2 # fc.weight, fc.bias
-
- optimizer = torch.optim.SGD(
- parameters, init_lr, momentum=args.momentum, weight_decay=args.weight_decay
- )
- if args.lars:
- print("=> use LARS optimizer.", flush=True)
- from LARC import LARC
-
- optimizer = LARC(optimizer=optimizer, trust_coefficient=0.001, clip=False)
-
- # optionally resume from a checkpoint
- if args.resume:
- if os.path.isfile(args.resume):
- print("=> loading checkpoint '{}'".format(args.resume), flush=True)
- if args.gpu is None:
- checkpoint = torch.load(args.resume)
- else:
- # Map model to be loaded to specified single gpu.
- loc = "cuda:{}".format(args.gpu)
- checkpoint = torch.load(args.resume, map_location=loc)
- args.start_epoch = checkpoint["epoch"]
- best_acc1 = checkpoint["best_acc1"]
- if args.gpu is not None:
- # best_acc1 may be from a checkpoint from a different GPU
- best_acc1 = best_acc1.to(args.gpu)
- model.load_state_dict(checkpoint["state_dict"])
- optimizer.load_state_dict(checkpoint["optimizer"])
- print(
- "=> loaded checkpoint '{}' (epoch {})".format(
- args.resume, checkpoint["epoch"]
- ),
- flush=True,
- )
- else:
- print("=> no checkpoint found at '{}'".format(args.resume))
-
- cudnn.benchmark = True
-
- # Data loading code
- traindir = os.path.join(args.data, "train")
- valdir = os.path.join(args.data, "val")
- normalize = transforms.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- )
-
- if args.dataset_name == "imagenet":
- train_dataset = datasets.ImageFolder(
- traindir,
- transforms.Compose(
- [
- transforms.RandomResizedCrop(224),
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- normalize,
- ]
- ),
- )
- val_dataset = datasets.ImageFolder(
- valdir,
- transforms.Compose(
- [
- transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- normalize,
- ]
- ),
- )
- elif args.dataset_name == "food101":
- print("=> using food101 dataset.", flush=True)
- train_dataset = datasets.Food101(
- root=args.data,
- split="train",
- transform=transforms.Compose(
- [
- transforms.RandomResizedCrop(224),
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- normalize,
- ],
- ),
- )
- val_dataset = datasets.Food101(
- root=args.data,
- split="test",
- transform=transforms.Compose(
- [
- transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- normalize,
- ],
- ),
- )
- elif args.dataset_name == "cifar10":
- train_dataset = datasets.CIFAR10(
- root=args.data,
- train=True,
- download=True,
- transform=transforms.Compose(
- [
- transforms.RandomResizedCrop(224),
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
- ],
- ),
- )
- val_dataset = datasets.CIFAR10(
- root=args.data,
- train=False,
- download=True,
- transform=transforms.Compose(
- [
- transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
- ],
- ),
- )
- elif args.dataset_name == "cifar100":
- train_dataset = datasets.CIFAR100(
- root=args.data,
- train=True,
- transform=transforms.Compose(
- [
- transforms.RandomResizedCrop(224),
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
- ],
- ),
- )
- val_dataset = datasets.CIFAR100(
- root=args.data,
- train=False,
- transform=transforms.Compose(
- [
- transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
- ],
- ),
- )
- elif args.dataset_name == "places365":
- train_dataset = datasets.Places365(
- root=args.data,
- split="train-standard",
- transform=transforms.Compose(
- [
- transforms.RandomResizedCrop(224),
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- normalize,
- ],
- ),
- )
- val_dataset = datasets.Places365(
- root=args.data,
- split="val",
- transform=transforms.Compose(
- [
- transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- normalize,
- ],
- ),
- )
- elif args.dataset_name == "INaturalist":
- train_dataset = INAT(
- root=args.data,
- ann_file=os.path.join(args.data, "train2018.json"),
- transform=transforms.Compose(
- [
- transforms.RandomResizedCrop(224),
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- normalize,
- ],
- ),
- )
- val_dataset = INAT(
- root=args.data,
- ann_file=os.path.join(args.data, "val2018.json"),
- transform=transforms.Compose(
- [
- transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- normalize,
- ],
- ),
- )
-
- if args.distributed:
- train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
- else:
- train_sampler = None
-
- train_loader = torch.utils.data.DataLoader(
- train_dataset,
- batch_size=args.batch_size,
- shuffle=(train_sampler is None),
- num_workers=args.workers,
- pin_memory=True,
- sampler=train_sampler,
- )
-
- val_loader = torch.utils.data.DataLoader(
- val_dataset,
- batch_size=256,
- shuffle=False,
- num_workers=args.workers,
- pin_memory=True,
- )
-
- if args.evaluate:
- validate(val_loader, model, criterion, args)
- return
-
- for epoch in range(args.start_epoch, args.epochs):
- if args.distributed:
- train_sampler.set_epoch(epoch)
- adjust_learning_rate(optimizer, init_lr, epoch, args)
-
- # train for one epoch
- train(train_loader, model, criterion, optimizer, epoch, args)
-
- # evaluate on validation set
- acc1 = validate(val_loader, model, criterion, args)
-
- # remember best acc@1 and save checkpoint
- is_best = acc1 > best_acc1
- best_acc1 = max(acc1, best_acc1)
-
- if not args.multiprocessing_distributed or (
- args.multiprocessing_distributed and args.rank % ngpus_per_node == 0
- ):
- checkpoint_name = "checkpoint_{:04d}.pth.tar".format(epoch + 1)
- checkpoint_file = os.path.join(args.checkpoint_dir, checkpoint_name)
- save_checkpoint(
- {
- "epoch": epoch + 1,
- "arch": args.arch,
- "state_dict": model.state_dict(),
- "best_acc1": best_acc1,
- "optimizer": optimizer.state_dict(),
- },
- is_best,
- filename=checkpoint_file,
- )
- if epoch == args.start_epoch:
- sanity_check(model.state_dict(), args.pretrained)
-
-
-def train(train_loader, model, criterion, optimizer, epoch, args):
- batch_time = AverageMeter("Time", ":6.3f")
- data_time = AverageMeter("Data", ":6.3f")
- losses = AverageMeter("Loss", ":.4e")
- top1 = AverageMeter("Acc@1", ":6.2f")
- top5 = AverageMeter("Acc@5", ":6.2f")
- progress = ProgressMeter(
- len(train_loader),
- [batch_time, data_time, losses, top1, top5],
- prefix="Epoch: [{}]".format(epoch),
- )
-
- """
- Switch to eval mode:
- Under the protocol of linear classification on frozen features/models,
- it is not legitimate to change any part of the pre-trained model.
- BatchNorm in train mode may revise running mean/std (even if it receives
- no gradient), which are part of the model parameters too.
- """
- model.eval()
-
- end = time.time()
-    for i, (images, target) in enumerate(tqdm(train_loader)):
- # measure data loading time
- data_time.update(time.time() - end)
-
- if args.gpu is not None:
- images = images.cuda(args.gpu, non_blocking=True)
- target = target.cuda(args.gpu, non_blocking=True)
-
- # compute output
- output = model(images)
- loss = criterion(output, target)
-
- # measure accuracy and record loss
- acc1, acc5 = accuracy(output, target, topk=(1, 5))
- losses.update(loss.item(), images.size(0))
- top1.update(acc1[0], images.size(0))
- top5.update(acc5[0], images.size(0))
-
- # compute gradient and do SGD step
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- # measure elapsed time
- batch_time.update(time.time() - end)
- end = time.time()
-
- if i % args.print_freq == 0:
- progress.display(i)
-
-
-def validate(val_loader, model, criterion, args):
- batch_time = AverageMeter("Time", ":6.3f")
- losses = AverageMeter("Loss", ":.4e")
- top1 = AverageMeter("Acc@1", ":6.2f")
- top5 = AverageMeter("Acc@5", ":6.2f")
- progress = ProgressMeter(
- len(val_loader), [batch_time, losses, top1, top5], prefix="Test: "
- )
-
- # switch to evaluate mode
- model.eval()
-
- with torch.no_grad():
- end = time.time()
-        for i, (images, target) in enumerate(tqdm(val_loader)):
- if args.gpu is not None:
- images = images.cuda(args.gpu, non_blocking=True)
- target = target.cuda(args.gpu, non_blocking=True)
-
- # compute output
- output = model(images)
- loss = criterion(output, target)
-
- # measure accuracy and record loss
- acc1, acc5 = accuracy(output, target, topk=(1, 5))
- losses.update(loss.item(), images.size(0))
- top1.update(acc1[0], images.size(0))
- top5.update(acc5[0], images.size(0))
-
- # measure elapsed time
- batch_time.update(time.time() - end)
- end = time.time()
-
- if i % args.print_freq == 0:
- progress.display(i)
-
-    # TODO: this should also be done with the ProgressMeter
- print(
- "\n * Accuracy@1 {top1.avg:.3f} Accuracy@5 {top5.avg:.3f}".format(
- top1=top1, top5=top5
- )
- )
-
- return top1.avg
-
-
-def save_checkpoint(state, is_best, filename="checkpoint.pth.tar"):
- torch.save(state, filename)
- if is_best:
- shutil.copyfile(filename, "model_best.pth.tar")
-
-
-def sanity_check(state_dict, pretrained_weights):
- """
- Linear classifier should not change any weights other than the linear layer.
- This sanity check asserts nothing wrong happens (e.g., BN stats updated).
- """
- print("=> loading '{}' for sanity check".format(pretrained_weights))
- checkpoint = torch.load(pretrained_weights, map_location="cpu")
-
- state_dict_pre = checkpoint["state_dict"]
-
- for k in list(state_dict.keys()):
- # only ignore fc layer
- if "fc.weight" in k or "fc.bias" in k:
- continue
-
- # name in pretrained model
- k_pre = (
- "module.encoder." + k[len("module.") :]
- if k.startswith("module.")
- else "module.encoder." + k
- )
-
- assert (
- state_dict[k].cpu() == state_dict_pre[k_pre]
- ).all(), "{} is changed in linear classifier training.".format(k)
-
- print("=> sanity check passed.")
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self, name, fmt=":f"):
- self.name = name
- self.fmt = fmt
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})"
- return fmtstr.format(**self.__dict__)
-
-
-class ProgressMeter(object):
- def __init__(self, num_batches, meters, prefix=""):
- self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
- self.meters = meters
- self.prefix = prefix
-
- def display(self, batch):
- entries = [self.prefix + self.batch_fmtstr.format(batch)]
- entries += [str(meter) for meter in self.meters]
- print("\t".join(entries), flush=True)
-
- def _get_batch_fmtstr(self, num_batches):
-        num_digits = len(str(num_batches))
- fmt = "{:" + str(num_digits) + "d}"
- return "[" + fmt + "/" + fmt.format(num_batches) + "]"
-
-
-def adjust_learning_rate(optimizer, init_lr, epoch, args):
- """Decay the learning rate based on schedule"""
- cur_lr = init_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / args.epochs))
- for param_group in optimizer.param_groups:
- param_group["lr"] = cur_lr
-
-
-def accuracy(output, target, topk=(1,)):
- """Computes the accuracy over the k top predictions for the specified values of k"""
- with torch.no_grad():
- maxk = max(topk)
- batch_size = target.size(0)
-
- _, pred = output.topk(maxk, 1, True, True)
- pred = pred.t()
- correct = pred.eq(target.view(1, -1).expand_as(pred))
-
- res = []
- for k in topk:
- correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
- res.append(correct_k.mul_(100.0 / batch_size))
- return res
-
-
-if __name__ == "__main__":
- main()
diff --git a/simsiam/loader.py b/simsiam/loader.py
deleted file mode 100644
index 5b53049..0000000
--- a/simsiam/loader.py
+++ /dev/null
@@ -1,124 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import random
-
-import torch
-from PIL import Image, ImageFilter
-from torchvision import datasets, transforms
-
-
-class GaussianBlur(object):
- """Gaussian blur augmentation in SimCLR https://arxiv.org/abs/2002.05709."""
-
-    def __init__(self, sigma=(0.1, 2.0)):
- self.sigma = sigma
-
- def __call__(self, x):
- sigma = random.uniform(self.sigma[0], self.sigma[1])
- x = x.filter(ImageFilter.GaussianBlur(radius=sigma))
- return x
-
-
-_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-
-# MoCo v2's aug: similar to SimCLR https://arxiv.org/abs/2002.05709
-_real_augmentations = [
- transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
- transforms.RandomApply(
- [transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], # not strengthened
- p=0.8,
- ),
- transforms.RandomGrayscale(p=0.2),
- transforms.RandomApply([GaussianBlur([0.1, 2.0])], p=0.5),
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- _normalize,
-]
-
-
-class TwoCropsTransform:
- """Take two random crops of one image as the query and key."""
-
- def __init__(self):
- self.base_transform = transforms.Compose(_real_augmentations)
-
- def __call__(self, x):
- q = self.base_transform(x)
- k = self.base_transform(x)
- return [q, k]
-
-
-class ImageNetSynthetic(datasets.ImageNet):
- def __init__(
- self,
- imagenet_root,
- imagenet_synthetic_root,
- index_min=0,
- index_max=9,
- generative_augmentation_prob=None,
- load_one_real_image=False,
- split="train",
- ):
- super(ImageNetSynthetic, self).__init__(
- root=imagenet_root,
- split=split,
- )
- self.imagenet_root = imagenet_root
- self.imagenet_synthetic_root = imagenet_synthetic_root
- self.index_min = index_min
- self.index_max = index_max
- self.generative_augmentation_prob = generative_augmentation_prob
- self.load_one_real_image = load_one_real_image
- self.real_transforms = transforms.Compose(_real_augmentations)
- # Remove random crop for synthetic image augmentation.
- self.synthetic_transforms = transforms.Compose(_real_augmentations[1:])
- self.split = split
-
- def __getitem__(self, index):
- imagenet_filename, label = self.imgs[index]
-
- def _synthetic_image(filename):
- rand_int = random.randint(self.index_min, self.index_max)
- filename_and_extension = filename.split("/")[-1]
- filename_parent_dir = filename.split("/")[-2]
- image_path = os.path.join(
- self.imagenet_synthetic_root,
- self.split,
- filename_parent_dir,
- filename_and_extension.split(".")[0] + f"_{rand_int}.JPEG",
- )
- return Image.open(image_path).convert("RGB")
-
- if self.generative_augmentation_prob is not None:
- if torch.rand(1) < self.generative_augmentation_prob:
- # Generate a synthetic image.
- image1 = _synthetic_image(imagenet_filename)
- image1 = self.synthetic_transforms(image1)
- else:
- image1 = self.loader(os.path.join(self.root, imagenet_filename))
- image1 = self.real_transforms(image1)
-
- if torch.rand(1) < self.generative_augmentation_prob:
- # Generate another synthetic image.
- image2 = _synthetic_image(imagenet_filename)
- image2 = self.synthetic_transforms(image2)
- else:
- image2 = self.loader(os.path.join(self.root, imagenet_filename))
- image2 = self.real_transforms(image2)
- else:
- if self.load_one_real_image:
- image1 = self.loader(os.path.join(self.root, imagenet_filename))
- image1 = self.real_transforms(image1)
- else:
- image1 = _synthetic_image(imagenet_filename)
- image1 = self.synthetic_transforms(image1)
- # image2 is always synthetic.
- image2 = _synthetic_image(imagenet_filename)
- image2 = self.synthetic_transforms(image2)
-
- return [image1, image2], label
diff --git a/simsiam/temp.py b/simsiam/temp.py
deleted file mode 100644
index e69de29..0000000
diff --git a/simsiam/train_simsiam.py b/simsiam/train_simsiam.py
deleted file mode 100644
index 4896c2e..0000000
--- a/simsiam/train_simsiam.py
+++ /dev/null
@@ -1,438 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import math
-import os
-import random
-from datetime import datetime
-from functools import partial
-
-import torch
-import torch.nn.parallel
-import torch.optim
-import torch.utils.data
-import torch.utils.data.distributed
-from torch import distributed as dist
-from torch import nn
-from torch.backends import cudnn
-from torch.nn.parallel import DistributedDataParallel as DDP # noqa: N817
-from torch.utils.data.distributed import DistributedSampler
-from torchvision import datasets, models
-from tqdm import tqdm
-
-from simsiam import distributed as dist_utils
-from simsiam import builder, loader
-
-
-model_names = sorted(
- name
- for name in models.__dict__
- if name.islower() and not name.startswith("__") and callable(models.__dict__[name])
-)
-
-parser = argparse.ArgumentParser(description="PyTorch ImageNet Training")
-parser.add_argument(
- "--data_dir",
- metavar="DIR",
- default="/scratch/ssd004/datasets/imagenet256",
- help="path to dataset.",
-)
-parser.add_argument(
- "-a",
- "--arch",
- metavar="ARCH",
- default="resnet50",
- choices=model_names,
- help="model architecture: " + " | ".join(model_names) + " (default: resnet50)",
-)
-parser.add_argument(
- "-j",
- "--num_workers",
- default=4,
- type=int,
- metavar="N",
- help="number of data loading workers (default: 32)",
-)
-parser.add_argument(
- "--epochs", default=100, type=int, metavar="N", help="number of total epochs to run"
-)
-parser.add_argument(
- "-b",
- "--batch-size",
- default=256,
- type=int,
- metavar="N",
- help="mini-batch size (default: 512), this is the total "
- "batch size of all GPUs on the current node when "
- "using Data Parallel or Distributed Data Parallel",
-)
-parser.add_argument(
- "--lr",
- "--learning-rate",
- default=0.05,
- type=float,
- metavar="LR",
- help="initial (base) learning rate",
- dest="lr",
-)
-parser.add_argument(
- "--momentum", default=0.9, type=float, metavar="M", help="momentum of SGD solver"
-)
-parser.add_argument(
- "--wd",
- "--weight-decay",
- default=1e-4,
- type=float,
- metavar="W",
- help="weight decay (default: 1e-4)",
- dest="weight_decay",
-)
-parser.add_argument(
- "--resume_from_checkpoint",
- default="",
- type=str,
- help="Path to latest checkpoint.",
-)
-parser.add_argument(
- "--seed", default=42, type=int, help="seed for initializing training. "
-)
-
-# simsiam specific configs:
-parser.add_argument(
- "--dim", default=2048, type=int, help="feature dimension (default: 2048)"
-)
-parser.add_argument(
- "--pred-dim",
- default=512,
- type=int,
- help="hidden dimension of the predictor (default: 512)",
-)
-parser.add_argument(
- "--fix-pred-lr", action="store_true", help="Fix learning rate for the predictor"
-)
-
-parser.add_argument(
- "--distributed_mode",
- action="store_true",
- help="Enable distributed training",
-)
-parser.add_argument("--distributed_launcher", default="slurm")
-parser.add_argument("--distributed_backend", default="nccl")
-parser.add_argument(
- "--checkpoint_dir",
- default="/projects/imagenet_synthetic/model_checkpoints",
- help="Checkpoint root directory.",
-)
-parser.add_argument(
- "--experiment",
- default="",
- help="Experiment name.",
-)
-parser.add_argument(
- "--use_synthetic_data",
- action=argparse.BooleanOptionalAction,
- help="Whether to use real data or synthetic data for training.",
-)
-parser.add_argument(
- "--synthetic_data_dir",
- default="/projects/imagenet_synthetic/",
- help="Path to the root of synthetic data.",
-)
-parser.add_argument(
- "--synthetic_index_min",
- default=0,
- type=int,
- help="Synthetic data files are named filename_i.JPEG. This index determines the lower bound for i.",
-)
-parser.add_argument(
- "--synthetic_index_max",
- default=9,
- type=int,
- help="Synthetic data files are named filename_i.JPEG. This index determines the upper bound for i.",
-)
-parser.add_argument(
- "--generative_augmentation_prob",
- default=None,
- type=float,
- help="The probability of applying a generative model augmentation to a view. Applies to the views separately.",
-)
-parser.add_argument(
- "-p",
- "--print-freq",
- default=10,
- type=int,
- metavar="N",
- help="print frequency (default: 10)",
-)
-
-
-def worker_init_fn(worker_id: int, num_workers: int, rank: int, seed: int) -> None:
- """Initialize worker processes with a random seed.
-
- Parameters
- ----------
- worker_id : int
- ID of the worker process.
- num_workers : int
- Total number of workers that will be initialized.
- rank : int
- The rank of the current process.
- seed : int
-        A random seed used to determine the worker seed.
- """
- worker_seed = num_workers * rank + worker_id + seed
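-    # e.g., num_workers=4, rank=2, worker_id=1, seed=42 -> worker_seed = 4*2 + 1 + 42 = 51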
- torch.manual_seed(worker_seed)
- random.seed(worker_seed)
-
-
-def setup() -> None:
- """Initialize the process group."""
- dist.init_process_group("nccl")
-
-
-def cleanup() -> None:
- """Clean up the process group after training."""
- dist.destroy_process_group()
-
-
-def main():
- args = parser.parse_args()
- current_time = datetime.now().strftime("%Y-%m-%d-%H-%M")
- checkpoint_subdir = (
- f"{args.experiment}_{current_time}" if args.experiment else f"{current_time}"
- )
- args.checkpoint_dir = os.path.join(args.checkpoint_dir, checkpoint_subdir)
- os.makedirs(args.checkpoint_dir, exist_ok=True)
-
- print(args)
-
-    # torch.multiprocessing.set_start_method("spawn")
- if args.distributed_mode:
- # dist_utils.init_distributed_mode(
- # launcher=args.distributed_launcher,
- # backend=args.distributed_backend,
- # )
- setup()
- torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
- torch.cuda.empty_cache()
- device_id = torch.cuda.current_device()
- else:
- device_id = None
-
- # Data loading.
- if args.use_synthetic_data:
- print(
- f"Using synthetic data for training at {args.synthetic_data_dir} between indices {args.synthetic_index_min} and {args.synthetic_index_max}."
- )
- train_dataset = loader.ImageNetSynthetic(
- args.data_dir,
- args.synthetic_data_dir,
- index_min=args.synthetic_index_min,
- index_max=args.synthetic_index_max,
- generative_augmentation_prob=args.generative_augmentation_prob,
- )
- else:
- print(f"Using real data for training at {args.data_dir}.")
- train_data_dir = os.path.join(args.data_dir, "train")
- train_dataset = datasets.ImageFolder(train_data_dir, loader.TwoCropsTransform())
-
- train_sampler = None
- if dist_utils.is_dist_avail_and_initialized() and args.distributed_mode:
- train_sampler = DistributedSampler(
- train_dataset,
- seed=args.seed,
- drop_last=True,
- )
- init_fn = partial(
- worker_init_fn,
- num_workers=args.num_workers,
- rank=dist_utils.get_rank(),
- seed=args.seed,
- )
-
- train_loader = torch.utils.data.DataLoader(
- train_dataset,
- batch_size=args.batch_size,
- shuffle=(train_sampler is None),
- sampler=train_sampler,
- num_workers=args.num_workers,
- worker_init_fn=init_fn,
- pin_memory=False,
- drop_last=True,
- )
- if dist_utils.get_rank() == 0:
- print(f"Creating model {args.arch}")
- model = builder.SimSiam(models.__dict__[args.arch], args.dim, args.pred_dim)
-
- if args.distributed_mode and dist_utils.is_dist_avail_and_initialized():
- # Apply SyncBN
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
- # set the single device scope, otherwise DistributedDataParallel will
- # use all available devices
- # torch.cuda.set_device(device_id)
- model = model.cuda(device_id)
- model = DDP(model, device_ids=[device_id])
- else:
- raise NotImplementedError("Only DistributedDataParallel is supported.")
- if dist_utils.get_rank() == 0:
- print(model) # print model after SyncBatchNorm
-
- # define loss function (criterion) and optimizer
- criterion = nn.CosineSimilarity(dim=1).cuda(device_id)
-
- if args.fix_pred_lr:
- optim_params = [
- {"params": model.module.encoder.parameters(), "fix_lr": False},
- {"params": model.module.predictor.parameters(), "fix_lr": True},
- ]
- else:
- optim_params = model.parameters()
-
- # infer learning rate before changing batch size
- # init_lr = args.lr * args.batch_size / 256.0
-    # TODO: init_lr is hard-coded to match the original paper's linear
-    # scaling rule (lr * batch_size / 256) for a batch size of 512.
-    init_lr = args.lr * 2.0
-
- optimizer = torch.optim.SGD(
- optim_params,
- init_lr,
- momentum=args.momentum,
- weight_decay=args.weight_decay,
- )
-
- start_epoch = 0
- # Optionally resume from a checkpoint
- if args.resume_from_checkpoint:
- if os.path.isfile(args.resume_from_checkpoint):
- print(f"Loading checkpoint: {args.resume_from_checkpoint}")
- checkpoint = torch.load(args.resume_from_checkpoint)
- start_epoch = checkpoint["epoch"] + 1
- model.load_state_dict(checkpoint["state_dict"])
- optimizer.load_state_dict(checkpoint["optimizer"])
- print(f"Loaded checkpoint {args.resume_from_checkpoint} successfully.")
- else:
- raise ValueError(f"No checkpoint found at: {args.resume_from_checkpoint}")
-
- cudnn.benchmark = True
-
- for epoch in range(start_epoch, args.epochs):
- print(f"Starting training epoch: {epoch}")
- if dist_utils.is_dist_avail_and_initialized():
- train_sampler.set_epoch(epoch)
- adjust_learning_rate(optimizer, init_lr, epoch, args)
-
- # train for one epoch
- train(train_loader, model, criterion, optimizer, epoch, device_id, args)
-
- # Checkpointing.
- if dist_utils.get_rank() == 0:
- checkpoint_name = "checkpoint_{:04d}.pth.tar".format(epoch)
- checkpoint_file = os.path.join(args.checkpoint_dir, checkpoint_name)
- save_checkpoint(
- {
- "epoch": epoch,
- "arch": args.arch,
- "state_dict": model.state_dict(),
- "optimizer": optimizer.state_dict(),
- },
- filename=checkpoint_file,
- )
-
-
-def train(train_loader, model, criterion, optimizer, epoch, device_id, args):
- """Single epoch training code."""
- losses = AverageMeter("Loss", ":.4f")
- progress = ProgressMeter(
- len(train_loader),
- [losses],
- prefix="Epoch: [{}]".format(epoch),
- )
-
- # switch to train mode
- model.train()
-
- for i, (images, _) in enumerate(train_loader):
- # for images, _ in tqdm(train_loader):
- images[0] = images[0].cuda(device_id, non_blocking=True)
- images[1] = images[1].cuda(device_id, non_blocking=True)
-
- # compute output and loss
- p1, p2, z1, z2 = model(x1=images[0], x2=images[1])
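-        # Symmetrized SimSiam objective: negative cosine similarity between
-        # each view's predictor output and the other view's projection.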
- loss = -(criterion(p1, z2).mean() + criterion(p2, z1).mean()) * 0.5
-
- losses.update(loss.item(), images[0].size(0))
-
- # compute gradient and do SGD step
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- if i % args.print_freq == 0:
- progress.display(i)
-
-
-def save_checkpoint(state, filename="checkpoint.pth.tar"):
- """Save state dictionary into a model checkpoint."""
- print(f"Saving checkpoint at: {filename}")
- torch.save(state, filename)
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self, name, fmt=":f"):
- self.name = name
- self.fmt = fmt
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})"
- return fmtstr.format(**self.__dict__)
-
-
-class ProgressMeter(object):
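-    """Display the tracked meters for a given batch during training."""
-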
- def __init__(self, num_batches, meters, prefix=""):
- self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
- self.meters = meters
- self.prefix = prefix
-
- def display(self, batch):
- entries = [self.prefix + self.batch_fmtstr.format(batch)]
- entries += [str(meter) for meter in self.meters]
- print("\t".join(entries))
-
- def _get_batch_fmtstr(self, num_batches):
-        num_digits = len(str(num_batches))
- fmt = "{:" + str(num_digits) + "d}"
- return "[" + fmt + "/" + fmt.format(num_batches) + "]"
-
-
-def adjust_learning_rate(optimizer, init_lr, epoch, args):
- """Decay the learning rate based on schedule."""
- cur_lr = init_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / args.epochs))
- for param_group in optimizer.param_groups:
- if "fix_lr" in param_group and param_group["fix_lr"]:
- param_group["lr"] = init_lr
- else:
- param_group["lr"] = cur_lr
-
-
-if __name__ == "__main__":
- main()
diff --git a/solo-learn/.codecov.yml b/solo-learn/.codecov.yml
deleted file mode 100644
index 571a362..0000000
--- a/solo-learn/.codecov.yml
+++ /dev/null
@@ -1,14 +0,0 @@
-comment:
- layout: "flags, files"
- behavior: default
- require_changes: false
- require_base: no
- require_head: no
- show_carryforward_flags: true
-
-flag_management:
- default_rules:
- carryforward: true
-
-coverage:
- range: 40...100 # custom range of coverage colors from red -> yellow -> green
diff --git a/solo-learn/.readthedocs.yml b/solo-learn/.readthedocs.yml
deleted file mode 100644
index b9764ee..0000000
--- a/solo-learn/.readthedocs.yml
+++ /dev/null
@@ -1,18 +0,0 @@
-version: 2
-
-build:
- os: ubuntu-20.04
- tools:
- python: "3.10"
-
-# This part is necessary otherwise the project is not built
-# Optionally set the version of Python and requirements required to build your docs
-python:
- install:
- - requirements: docs/requirements.txt
- - method: setuptools
- path: .
-
-# By default readthedocs does not checkout git submodules
-submodules:
- include: all
diff --git a/solo-learn/README.md b/solo-learn/README.md
deleted file mode 100644
index 9e62280..0000000
--- a/solo-learn/README.md
+++ /dev/null
@@ -1,328 +0,0 @@
-
-[](https://github.com/vturrisi/solo-learn/actions/workflows/tests.yml)
-[](https://solo-learn.readthedocs.io/en/latest/?badge=latest)
-[](https://codecov.io/gh/vturrisi/solo-learn)
-
-
-
-# solo-learn
-A library of self-supervised methods for unsupervised visual representation learning powered by PyTorch Lightning.
-We aim to provide SOTA self-supervised methods in a comparable environment while also implementing training tricks.
-The library is self-contained, but it is possible to use the models outside of solo-learn. **More details in our [paper](#citation)**.
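-
-For example, reusing a pretrained backbone outside the library might look like
-the following minimal sketch (it assumes a checkpoint whose backbone weights
-live under `backbone.`-prefixed keys in `state_dict`, the same layout assumed
-by `convert_model_to_detectron2.py` in this repository; the path and the
-ResNet-18 backbone are placeholders):
-
-```python
-import torch
-from torchvision.models import resnet18
-
-state = torch.load("path/to/checkpoint.ckpt", map_location="cpu")["state_dict"]
-# Keep only backbone weights and strip the "backbone." prefix.
-backbone_state = {
-    k.replace("backbone.", "", 1): v
-    for k, v in state.items()
-    if k.startswith("backbone.")
-}
-model = resnet18()
-model.fc = torch.nn.Identity()  # the fc head is not part of the SSL backbone
-model.load_state_dict(backbone_state, strict=False)
-```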
-
----
-
-## News
-* **[Jan 14 2024]**: :clap: Bunch of stability improvements during 2023 :) Also added [All4One](https://openaccess.thecvf.com/content/ICCV2023/html/Estepa_All4One_Symbiotic_Neighbour_Contrastive_Learning_via_Self-Attention_and_Redundancy_Reduction_ICCV_2023_paper.html).
-* **[Jan 07 2023]**: :diving_mask: Added results, checkpoints and configs for MAE on ImageNet. Thanks to [HuangChiEn](https://github.com/HuangChiEn).
-* **[Dec 31 2022]**: :stars: Shiny new logo! Huge thanks to [Luiz](https://www.instagram.com/linhaaspera/)!
-* **[Sep 27 2022]**: :pencil: Brand new config system using OmegaConf/Hydra. Adds more clarity and flexibility. New tutorials will follow soon!
-* **[Aug 04 2022]**: :paintbrush: Added [MAE](https://arxiv.org/abs/2111.06377) and supports finetuning the backbone with `main_linear.py`, mixup, cutmix and [random augment](https://arxiv.org/abs/1909.13719).
-* **[Jul 13 2022]**: :sparkling_heart: Added support for [H5](https://docs.h5py.org/en/stable/index.html) data, improved scripts and data handling.
-* **[Jun 26 2022]**: :fire: Added [MoCo V3](https://arxiv.org/abs/2104.02057).
-* **[Jun 10 2022]**: :bomb: Improved LARS.
-* **[Jun 09 2022]**: :lollipop: Added support for [WideResnet](https://arxiv.org/abs/1605.07146), multicrop for SwAV and equalization data augmentation.
-* **[May 02 2022]**: :diamond_shape_with_a_dot_inside: Wrapped Dali with a DataModule, added auto resume for linear eval and Wandb run resume.
-* **[Apr 12 2022]**: :rainbow: Improved design of models and added support to train with a fraction of data.
-* **[Apr 01 2022]**: :mag: Added the option to use [channel last conversion](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html#converting-existing-models) which considerably decreases training times.
-* **[Feb 04 2022]**: :partying_face: Paper got accepted to JMLR.
-* **[Jan 31 2022]**: :eye: Added ConvNeXt support with timm.
-* **[Dec 20 2021]**: :thermometer: Added ImageNet results, scripts and checkpoints for MoCo V2+.
-* **[Dec 05 2021]**: :notes: Separated [SupCon](https://arxiv.org/abs/2004.11362) from SimCLR and added runs.
-* **[Dec 01 2021]**: :fountain: Added [PoolFormer](https://arxiv.org/abs/2111.11418).
-* **[Nov 29 2021]**: :bangbang: Breaking changes! Update your versions!!!
-* **[Nov 29 2021]**: :book: New tutorials!
-* **[Nov 29 2021]**: :houses: Added offline K-NN and offline UMAP.
-* **[Nov 29 2021]**: :rotating_light: Updated PyTorch and PyTorch Lightning versions. 10% faster.
-* **[Nov 29 2021]**: :beers: Added code of conduct, contribution instructions, issue templates and UMAP tutorial.
-* **[Nov 23 2021]**: :space_invader: Added [VIbCReg](https://arxiv.org/abs/2109.00783).
-* **[Oct 21 2021]**: :triumph: Added support for object recognition via Detectron v2 and auto resume functionality that automatically tries to resume an experiment that crashed/reached a timeout.
-* **[Oct 10 2021]**: :japanese_ogre: Restructured augmentation pipelines to allow more flexibility and multicrop. Also added multicrop for BYOL.
-* **[Sep 27 2021]**: :pizza: Added [NNSiam](https://arxiv.org/abs/2104.14548), [NNBYOL](https://arxiv.org/abs/2104.14548), new tutorials for implementing new methods [1](https://solo-learn.readthedocs.io/en/latest/tutorials/add_new_method.html) and [2](https://solo-learn.readthedocs.io/en/latest/tutorials/add_new_method_momentum.html), more testing and fixed issues with custom data and linear evaluation.
-* **[Sep 19 2021]**: :kangaroo: Added online k-NN evaluation.
-* **[Sep 17 2021]**: :robot: Added [ViT](https://arxiv.org/abs/2010.11929) and [Swin](https://arxiv.org/abs/2103.14030).
-* **[Sep 13 2021]**: :book: Improved [Docs](https://solo-learn.readthedocs.io/en/latest/?badge=latest) and added tutorials for [pretraining](https://solo-learn.readthedocs.io/en/latest/tutorials/overview.html) and [offline linear eval](https://solo-learn.readthedocs.io/en/latest/tutorials/offline_linear_eval.html).
-* **[Aug 13 2021]**: :whale: [DeepCluster V2](https://arxiv.org/abs/2006.09882) is now available.
-
----
-
-## Roadmap and help needed
-* Redoing the documentation to improve clarity.
-* Better and up-to-date tutorials.
-* Add performance-related testing to ensure that methods perform the same across updates.
-* Adding new methods (continuous effort).
-
----
-
-## Methods available
-* [All4One](https://openaccess.thecvf.com/content/ICCV2023/html/Estepa_All4One_Symbiotic_Neighbour_Contrastive_Learning_via_Self-Attention_and_Redundancy_Reduction_ICCV_2023_paper.html)
-* [Barlow Twins](https://arxiv.org/abs/2103.03230)
-* [BYOL](https://arxiv.org/abs/2006.07733)
-* [DeepCluster V2](https://arxiv.org/abs/2006.09882)
-* [DINO](https://arxiv.org/abs/2104.14294)
-* [MAE](https://arxiv.org/abs/2111.06377)
-* [MoCo V2+](https://arxiv.org/abs/2003.04297)
-* [MoCo V3](https://arxiv.org/abs/2104.02057)
-* [NNBYOL](https://arxiv.org/abs/2104.14548)
-* [NNCLR](https://arxiv.org/abs/2104.14548)
-* [NNSiam](https://arxiv.org/abs/2104.14548)
-* [ReSSL](https://arxiv.org/abs/2107.09282)
-* [SimCLR](https://arxiv.org/abs/2002.05709)
-* [SimSiam](https://arxiv.org/abs/2011.10566)
-* [Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362)
-* [SwAV](https://arxiv.org/abs/2006.09882)
-* [VIbCReg](https://arxiv.org/abs/2109.00783)
-* [VICReg](https://arxiv.org/abs/2105.04906)
-* [W-MSE](https://arxiv.org/abs/2007.06346)
-
----
-
-## Extra flavor
-
-### Backbones
-* [ResNet](https://arxiv.org/abs/1512.03385)
-* [WideResNet](https://arxiv.org/abs/1605.07146)
-* [ViT](https://arxiv.org/abs/2010.11929)
-* [Swin](https://arxiv.org/abs/2103.14030)
-* [PoolFormer](https://arxiv.org/abs/2111.11418)
-* [ConvNeXt](https://arxiv.org/abs/2201.03545)
-
-### Data
-* Increased data processing speed by up to 100% using [Nvidia Dali](https://github.com/NVIDIA/DALI).
-* Flexible augmentations.
-
-### Evaluation
-* Online linear evaluation via stop-gradient for easier debugging and prototyping (optionally available for the momentum backbone as well); see the sketch after this list.
-* Standard offline linear evaluation.
-* Online and offline K-NN evaluation.
-* Automatic feature space visualization with UMAP.
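-
-A minimal sketch of the stop-gradient trick behind online linear evaluation
-(illustrative only, not the library's exact API): the probe is trained on
-detached features, so its loss never updates the SSL backbone.
-
-```python
-import torch
-import torch.nn.functional as F
-from torchvision.models import resnet18
-
-backbone = resnet18()
-backbone.fc = torch.nn.Identity()  # expose 512-d features
-probe = torch.nn.Linear(512, 100)  # hypothetical 100-class probe
-
-images = torch.randn(8, 3, 224, 224)
-labels = torch.randint(0, 100, (8,))
-logits = probe(backbone(images).detach())   # stop-gradient before the probe
-F.cross_entropy(logits, labels).backward()  # gradients reach only the probe
-```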
-
-### Training tricks
-* All the perks of PyTorch Lightning (mixed precision, gradient accumulation, clipping, and much more).
-* Channel last conversion.
-* Multi-cropping dataloading following [SwAV](https://arxiv.org/abs/2006.09882):
- * **Note**: currently, only SimCLR, BYOL and SwAV support this.
-* Exclude batchnorm and biases from weight decay and LARS; see the sketch after this list.
-* No LR scheduler for the projection head (as in SimSiam).
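-
-A common way to implement the weight-decay exclusion (a sketch, not
-solo-learn's exact code) is to split the parameters into two optimizer groups:
-
-```python
-import torch
-from torchvision.models import resnet18
-
-model = resnet18()
-decay, no_decay = [], []
-for p in model.parameters():
-    # 1-D tensors are biases and norm (e.g. batchnorm) weights.
-    (no_decay if p.ndim <= 1 else decay).append(p)
-
-optimizer = torch.optim.SGD(
-    [{"params": decay, "weight_decay": 1e-4},
-     {"params": no_decay, "weight_decay": 0.0}],
-    lr=0.1,
-    momentum=0.9,
-)
-```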
-
-### Logging
-* Metric logging on the cloud with [WandB](https://wandb.ai/site).
-* Custom model checkpointing with a simple file organization.
-
----
-## Requirements
-* torch
-* torchvision
-* tqdm
-* einops
-* wandb
-* pytorch-lightning
-* lightning-bolts
-* torchmetrics
-* scipy
-* timm
-
-**Optional**:
-* nvidia-dali
-* matplotlib
-* seaborn
-* pandas
-* umap-learn
-
----
-
-## Installation
-
-First clone the repo.
-
-Then, to install solo-learn with [Dali](https://github.com/NVIDIA/DALI) and/or UMAP support, use:
-```bash
-pip3 install .[dali,umap,h5] --extra-index-url https://developer.download.nvidia.com/compute/redist
-```
-
-If no Dali/UMAP/H5 support is needed, the repository can be installed as:
-```bash
-pip3 install .
-```
-
-For local development:
-```bash
-pip3 install -e .[umap,h5]
-# Make sure you have pre-commit hooks installed
-pre-commit install
-```
-
-**NOTE:** If you are having trouble with Dali, install it following their [guide](https://github.com/NVIDIA/DALI).
-
-**NOTE 2:** consider installing [Pillow-SIMD](https://github.com/uploadcare/pillow-simd) for better loading times when not using Dali.
-
-**NOTE 3:** Soon to be on pip.
-
----
-
-## Training
-
-For pretraining the backbone, follow one of the many bash files in `scripts/pretrain/`.
-We are now using [Hydra](https://github.com/facebookresearch/hydra) to handle the config files, so the common syntax is something like:
-```bash
-# --config-path: path to the training script folder
-# --config-name: training config name
-# Extra arguments (e.g. those not defined in the yaml files) can be appended
-# with ++new_argument=VALUE; PyTorch Lightning's arguments work here as well.
-python3 main_pretrain.py \
-    --config-path scripts/pretrain/imagenet-100/ \
-    --config-name barlow.yaml
-```
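-
-For instance, a complete invocation that also overrides an existing option
-from the command line could look like this (hypothetical epoch count):
-
-```bash
-python3 main_pretrain.py --config-path scripts/pretrain/imagenet-100/ \
-    --config-name barlow.yaml ++max_epochs=200
-```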
-
-After that, for offline linear evaluation, follow the examples in `scripts/linear` or `scripts/finetune` for finetuning the whole backbone.
-
-For k-NN evaluation and UMAP visualization check the scripts in `scripts/{knn,umap}`.
-
-**NOTE:** The config files aim to stay up to date and to follow each paper's recommended parameters as closely as possible, but double-check them before running.
-
----
-
-## Tutorials
-
-Please, check out our [documentation](https://solo-learn.readthedocs.io/en/latest) and tutorials:
-* [Overview](https://solo-learn.readthedocs.io/en/latest/tutorials/overview.html)
-* [Offline linear eval](https://solo-learn.readthedocs.io/en/latest/tutorials/offline_linear_eval.html)
-* [Object detection](https://github.com/vturrisi/solo-learn/blob/main/downstream/object_detection/README.md)
-* [Adding a new method](https://github.com/vturrisi/solo-learn/blob/main/docs/source/tutorials/add_new_method.rst)
-* [Adding a new momentum method](https://github.com/vturrisi/solo-learn/blob/main/docs/source/tutorials/add_new_method_momentum.rst)
-* [Visualizing features with UMAP](https://github.com/vturrisi/solo-learn/blob/main/docs/source/tutorials/umap.rst)
-* [Offline k-NN](https://github.com/vturrisi/solo-learn/blob/main/docs/source/tutorials/knn.rst)
-
-If you want to contribute to solo-learn, make sure you take a look at [how to contribute](https://github.com/vturrisi/solo-learn/blob/main/.github/CONTRIBUTING.md) and follow the [code of conduct](https://github.com/vturrisi/solo-learn/blob/main/.github/CODE_OF_CONDUCT.md).
-
----
-
-## Model Zoo
-
-All available pretrained models can be downloaded directly via the tables below or programmatically by running one of the following scripts:
-`zoo/cifar10.sh`, `zoo/cifar100.sh`, `zoo/imagenet100.sh` and `zoo/imagenet.sh`.
-
----
-
-## Results
-
-**Note:** hyperparameters may not be the best; we will eventually re-run the methods with lower performance.
-
-### CIFAR-10
-
-| Method | Backbone | Epochs | Dali | Acc@1 | Acc@5 | Checkpoint |
-|--------------|:--------:|:------:|:----:|:--------------:|:--------------:|:----------:|
-| All4One | ResNet18 | 1000 | :x: | 93.24 | 99.88 | [:link:](https://drive.google.com/drive/folders/1dtYmZiftruQ7B2PQ8fo44wguCZ0eSzAd?usp=sharing) |
-| Barlow Twins | ResNet18 | 1000 | :x: | 92.10 | 99.73 | [:link:](https://drive.google.com/drive/folders/1L5RAM3lCSViD2zEqLtC-GQKVw6mxtxJ_?usp=sharing) |
-| BYOL | ResNet18 | 1000 | :x: | 92.58 | 99.79 | [:link:](https://drive.google.com/drive/folders/1KxeYAEE7Ev9kdFFhXWkPZhG-ya3_UwGP?usp=sharing) |
-|DeepCluster V2| ResNet18 | 1000 | :x: | 88.85 | 99.58 | [:link:](https://drive.google.com/drive/folders/1tkEbiDQ38vZaQUsT6_vEpxbDxSUAGwF-?usp=sharing) |
-| DINO | ResNet18 | 1000 | :x: | 89.52 | 99.71 | [:link:](https://drive.google.com/drive/folders/1vyqZKUyP8sQyEyf2cqonxlGMbQC-D1Gi?usp=sharing) |
-| MoCo V2+ | ResNet18 | 1000 | :x: | 92.94 | 99.79 | [:link:](https://drive.google.com/drive/folders/1ruNFEB3F-Otxv2Y0p62wrjA4v5Fr2cKC?usp=sharing) |
-| MoCo V3 | ResNet18 | 1000 | :x: | 93.10 | 99.80 | [:link:](https://drive.google.com/drive/folders/1KwZTshNEpmqnYJcmyYPvfIJ_DNwqtAVj?usp=sharing) |
-| NNCLR | ResNet18 | 1000 | :x: | 91.88 | 99.78 | [:link:](https://drive.google.com/drive/folders/1xdCzhvRehPmxinphuiZqFlfBwfwWDcLh?usp=sharing) |
-| ReSSL | ResNet18 | 1000 | :x: | 90.63 | 99.62 | [:link:](https://drive.google.com/drive/folders/1jrFcztY2eO_fG98xPshqOD15pDIhLXp-?usp=sharing) |
-| SimCLR | ResNet18 | 1000 | :x: | 90.74 | 99.75 | [:link:](https://drive.google.com/drive/folders/1mcvWr8P2WNJZ7TVpdLHA_Q91q4VK3y8O?usp=sharing) |
-| Simsiam | ResNet18 | 1000 | :x: | 90.51 | 99.72 | [:link:](https://drive.google.com/drive/folders/1OO_igM3IK5oDw7GjQTNmdfg2I1DH3xOk?usp=sharing) |
-| SupCon | ResNet18 | 1000 | :x: | 93.82 | 99.65 | [:link:](https://drive.google.com/drive/folders/1VwZ9TrJXCpnxyo7P_l397yGrGH-DAUv-?usp=sharing) |
-| SwAV | ResNet18 | 1000 | :x: | 89.17 | 99.68 | [:link:](https://drive.google.com/drive/folders/1nlJH4Ljm8-5fOIeAaKppQT6gtsmmW1T0?usp=sharing) |
-| VIbCReg | ResNet18 | 1000 | :x: | 91.18 | 99.74 | [:link:](https://drive.google.com/drive/folders/1XvxUOnLPZlC_-OkeuO7VqXT7z9_tNVk7?usp=sharing) |
-| VICReg | ResNet18 | 1000 | :x: | 92.07 | 99.74 | [:link:](https://drive.google.com/drive/folders/159ZgCxocB7aaHxwNDubnAWU71zXV9hn-?usp=sharing) |
-| W-MSE | ResNet18 | 1000 | :x: | 88.67 | 99.68 | [:link:](https://drive.google.com/drive/folders/1xPCiULzQ4JCmhrTsbxBp9S2jRZ01KiVM?usp=sharing) |
-
-
-### CIFAR-100
-
-| Method | Backbone | Epochs | Dali | Acc@1 | Acc@5 | Checkpoint |
-|--------------|:--------:|:------:|:----:|:--------------:|:--------------:|:----------:|
-| All4One | ResNet18 | 1000 | :x: | 72.17 | 93.35 | [:link:](https://drive.google.com/drive/folders/1oQcC80XPr-Wxhjs-PEqD_8VhUa_izqeZ?usp=sharing) |
-| Barlow Twins | ResNet18 | 1000 | :x: | 70.90 | 91.91 | [:link:](https://drive.google.com/drive/folders/1hDLSApF3zSMAKco1Ck4DMjyNxhsIR2yq?usp=sharing) |
-| BYOL | ResNet18 | 1000 | :x: | 70.46 | 91.96 | [:link:](https://drive.google.com/drive/folders/1hwsEdsfsUulD2tAwa4epKK9pkSuvFv6m?usp=sharing) |
-|DeepCluster V2| ResNet18 | 1000 | :x: | 63.61 | 88.09 | [:link:](https://drive.google.com/drive/folders/1gAKyMz41mvGh1BBOYdc_xu6JPSkKlWqK?usp=sharing) |
-| DINO | ResNet18 | 1000 | :x: | 66.76 | 90.34 | [:link:](https://drive.google.com/drive/folders/1TxeZi2YLprDDtbt_y5m29t4euroWr1Fy?usp=sharing) |
-| MoCo V2+ | ResNet18 | 1000 | :x: | 69.89 | 91.65 | [:link:](https://drive.google.com/drive/folders/15oWNM16vO6YVYmk_yOmw2XUrFivRXam4?usp=sharing) |
-| MoCo V3 | ResNet18 | 1000 | :x: | 68.83 | 90.57 | [:link:](https://drive.google.com/drive/folders/1Hcf9kMIADKydfxvXLquY9nv7sfNaJ3v6?usp=sharing) |
-| NNCLR | ResNet18 | 1000 | :x: | 69.62 | 91.52 | [:link:](https://drive.google.com/drive/folders/1Dz72o0-5hugYPW1kCCQDBb0Xi3kzMLzu?usp=sharing) |
-| ReSSL | ResNet18 | 1000 | :x: | 65.92 | 89.73 | [:link:](https://drive.google.com/drive/folders/1aVZs9cHAu6Ccz8ILyWkp6NhTsJGBGfjr?usp=sharing) |
-| SimCLR | ResNet18 | 1000 | :x: | 65.78 | 89.04 | [:link:](https://drive.google.com/drive/folders/13pGPcOO9Y3rBoeRVWARgbMFEp8OXxZa0?usp=sharing) |
-| Simsiam | ResNet18 | 1000 | :x: | 66.04 | 89.62 | [:link:](https://drive.google.com/drive/folders/1AJUPmsIHh_nqEcFe-Vcz2o4ruEibFHWO?usp=sharing) |
-| SupCon | ResNet18 | 1000 | :x: | 70.38 | 89.57 | [:link:](https://drive.google.com/drive/folders/15C68oHPDMAOPtmBAm_Xw6YI6GgOW00gM?usp=sharing) |
-| SwAV | ResNet18 | 1000 | :x: | 64.88 | 88.78 | [:link:](https://drive.google.com/drive/folders/1U_bmyhlPEN941hbx0SdRGOT4ivCarQB9?usp=sharing) |
-| VIbCReg | ResNet18 | 1000 | :x: | 67.37 | 90.07 | [:link:](https://drive.google.com/drive/folders/19u3p1maX3xqwoCHNrqSDb98J5fRvd_6v?usp=sharing) |
-| VICReg | ResNet18 | 1000 | :x: | 68.54 | 90.83 | [:link:](https://drive.google.com/drive/folders/1AHmVf_Zl5fikkmR4X3NWlmMOnRzfv0aT?usp=sharing) |
-| W-MSE | ResNet18 | 1000 | :x: | 61.33 | 87.26 | [:link:](https://drive.google.com/drive/folders/1vc9j3RLpVCbECh6o-44oMiE5snNyKPlF?usp=sharing) |
-
-### ImageNet-100
-
-| Method | Backbone | Epochs | Dali | Acc@1 (online) | Acc@1 (offline) | Acc@5 (online) | Acc@5 (offline) | Checkpoint |
-|-------------------------|:--------:|:------:|:------------------:|:--------------:|:---------------:|:--------------:|:---------------:|:----------:|
-| All4One | ResNet18 | 400 | :heavy_check_mark: | 81.93 | - | 96.23 | - | [:link:](https://drive.google.com/drive/folders/1bJCRLP5Rz_JEylNq9C4sY3ccYZSchUGR?usp=sharing) |
-| Barlow Twins :rocket: | ResNet18 | 400 | :heavy_check_mark: | 80.38 | 80.16 | 95.28 | 95.14 | [:link:](https://drive.google.com/drive/folders/1rj8RbER9E71mBlCHIZEIhKPUFn437D5O?usp=sharing) |
-| BYOL :rocket: | ResNet18 | 400 | :heavy_check_mark: | 80.16 | 80.32 | 95.02 | 94.94 | [:link:](https://drive.google.com/drive/folders/1riOLjMawD_znO4HYj8LBN2e1X4jXpDE1?usp=sharing) |
-| DeepCluster V2 | ResNet18 | 400 | :x: | 75.36 | 75.4 | 93.22 | 93.10 | [:link:](https://drive.google.com/drive/folders/1d5jPuavrQ7lMlQZn5m2KnN5sPMGhHFo8?usp=sharing) |
-| DINO | ResNet18 | 400 | :heavy_check_mark: | 74.84 | 74.92 | 92.92 | 92.78 | [:link:](https://drive.google.com/drive/folders/1NtVvRj-tQJvrMxRlMtCJSAecQnYZYkqs?usp=sharing) |
-| DINO :sleepy: | ViT Tiny | 400 | :x: | 63.04 | TODO | 87.72 | TODO | [:link:](https://drive.google.com/drive/folders/16AfsM-UpKky43kdSMlqj4XRe69pRdJLc?usp=sharing) |
-| MoCo V2+ :rocket: | ResNet18 | 400 | :heavy_check_mark: | 78.20 | 79.28 | 95.50 | 95.18 | [:link:](https://drive.google.com/drive/folders/1ItYBtMJ23Yh-Rhrvwjm4w1waFfUGSoKX?usp=sharing) |
-| MoCo V3 :rocket: | ResNet18 | 400 | :heavy_check_mark: | 80.36 | 80.36 | 95.18 | 94.96 | [:link:](https://drive.google.com/drive/folders/15J0JiZsQAsrQler8mbbio-desb_nVoD1?usp=sharing) |
-| MoCo V3 :rocket: | ResNet50 | 400 | :heavy_check_mark: | 85.48 | 84.58 | 96.82 | 96.70 | [:link:](https://drive.google.com/drive/folders/1a1VRXGlP50COZ57DPUA_doBmpaxGKpQE?usp=sharing) |
-| NNCLR :rocket: | ResNet18 | 400 | :heavy_check_mark: | 79.80 | 80.16 | 95.28 | 95.30 | [:link:](https://drive.google.com/drive/folders/1QMkq8w3UsdcZmoNUIUPgfSCAZl_LSNjZ?usp=sharing) |
-| ReSSL | ResNet18 | 400 | :heavy_check_mark: | 76.92 | 78.48 | 94.20 | 94.24 | [:link:](https://drive.google.com/drive/folders/1urWIFACLont4GAduis6l0jcEbl080c9U?usp=sharing) |
-| SimCLR :rocket: | ResNet18 | 400 | :heavy_check_mark: | 77.64 | TODO | 94.06 | TODO | [:link:](https://drive.google.com/drive/folders/1yxAVKnc8Vf0tDfkixSB5mXe7dsA8Ll37?usp=sharing) |
-| Simsiam | ResNet18 | 400 | :heavy_check_mark: | 74.54 | 78.72 | 93.16 | 94.78 | [:link:](https://drive.google.com/drive/folders/1Bc8Xj-Z7ILmspsiEQHyQsTOn4M99F_f5?usp=sharing) |
-| SupCon | ResNet18 | 400 | :heavy_check_mark: | 84.40 | TODO | 95.72 | TODO | [:link:](https://drive.google.com/drive/folders/1BzR0nehkCKpnLhi-oeDynzzUcCYOCUJi?usp=sharing) |
-| SwAV | ResNet18 | 400 | :heavy_check_mark: | 74.04 | 74.28 | 92.70 | 92.84 | [:link:](https://drive.google.com/drive/folders/1VWCMM69sokzjVoPzPSLIsUy5S2Rrm1xJ?usp=sharing) |
-| VIbCReg | ResNet18 | 400 | :heavy_check_mark: | 79.86 | 79.38 | 94.98 | 94.60 | [:link:](https://drive.google.com/drive/folders/1Q06hH18usvRwj2P0bsmoCkjNUX_0syCK?usp=sharing) |
-| VICReg :rocket: | ResNet18 | 400 | :heavy_check_mark: | 79.22 | 79.40 | 95.06 | 95.02 | [:link:](https://drive.google.com/drive/folders/1uWWR5VBUru8vaHaGeLicS6X3R4CfZsr2?usp=sharing) |
-| W-MSE | ResNet18 | 400 | :heavy_check_mark: | 67.60 | 69.06 | 90.94 | 91.22 | [:link:](https://drive.google.com/drive/folders/1TxubagNV4z5Qs7SqbBcyRHWGKevtFO5l?usp=sharing) |
-
-:rocket: methods where hyperparameters were heavily tuned.
-
-:sleepy: ViT is very compute intensive and unstable, so we are slowly running larger architectures with larger batch sizes. At the moment, the total batch size is 128 and we needed to use float32 precision. If you want to contribute by running it, let us know!
-
-### ImageNet
-
-| Method | Backbone | Epochs | Dali | Acc@1 (online) | Acc@1 (offline) | Acc@5 (online) | Acc@5 (offline) | Checkpoint | Finetuned Checkpoint |
-|--------------|:--------:|:------:|:------------------:|:--------------:|:---------------:|:--------------:|:---------------:|:----------:|:----------:|
-| Barlow Twins | ResNet50 | 100 | :heavy_check_mark: | 67.18 | 67.23 | 87.69 | 87.98 | [:link:](https://drive.google.com/drive/folders/1IQUIrCOSduAjUJ31WJ1G5tHDZzWUIEft?usp=sharing) | |
-| BYOL | ResNet50 | 100 | :heavy_check_mark: | 68.63 | 68.37 | 88.80 | 88.66 | [:link:](https://drive.google.com/drive/folders/1-UXo-MttdrqiEQXfV4Duc93fA3mIdsha?usp=sharing) | |
-| MoCo V2+ | ResNet50 | 100 | :heavy_check_mark: | 62.61 | 66.84 | 85.40 | 87.60 | [:link:](https://drive.google.com/drive/folders/1NiBDmieEpNqkwrgn_H7bMnEDVAYc8Sk7?usp=sharing) | |
-| MAE | ViT-B/16 | 100 | :x: | ~ | 81.60 (finetuned) | ~ | 95.50 (finetuned) | [:link:](https://drive.google.com/drive/folders/1OuaXCnQ7WeqyKPxfJibAkXoVTx7S8Hbb) | [:link:](https://drive.google.com/drive/folders/1c9DGhmLsTTtOu2vc9rodqm89wKtp40C5) |
-
-
-
-## Training efficiency for DALI
-
-We report the training efficiency of some methods using a ResNet18, with and without DALI (4 workers per GPU), on a server with an Intel i9-9820X and two RTX 2080 Ti GPUs.
-
-| Method | Dali | Total time for 20 epochs | Time for 1 epoch | GPU memory (per GPU) |
-|--------------|:----------------:|:--------------------------:|:--------------------:|:---------------------:|
-| Barlow Twins | :x: | 1h 38m 27s | 4m 55s | 5097 MB |
-| |:heavy_check_mark:| 43m 2s | 2m 10s (56% faster) | 9292 MB |
-| BYOL | :x: | 1h 38m 46s | 4m 56s | 5409 MB |
-| |:heavy_check_mark:| 50m 33s | 2m 31s (49% faster) | 9521 MB |
-| NNCLR | :x: | 1h 38m 30s | 4m 55s | 5060 MB |
-| |:heavy_check_mark:| 42m 3s | 2m 6s (64% faster) | 9244 MB |
-
-**Note**: the GPU memory increase doesn't scale with the model; rather, it scales with the number of workers.
-
----
-
-## Citation
-If you use solo-learn, please cite our [paper](https://jmlr.org/papers/v23/21-1155.html):
-```bibtex
-@article{JMLR:v23:21-1155,
- author = {Victor Guilherme Turrisi da Costa and Enrico Fini and Moin Nabi and Nicu Sebe and Elisa Ricci},
- title = {solo-learn: A Library of Self-supervised Methods for Visual Representation Learning},
- journal = {Journal of Machine Learning Research},
- year = {2022},
- volume = {23},
- number = {56},
- pages = {1-6},
- url = {http://jmlr.org/papers/v23/21-1155.html}
-}
-```
diff --git a/solo-learn/downstream/object_detection/README.md b/solo-learn/downstream/object_detection/README.md
deleted file mode 100644
index 0081a19..0000000
--- a/solo-learn/downstream/object_detection/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-## Transferring to Detection
-
-The `train_object_detection.py` script reproduces the object detection experiments on Pascal VOC and COCO.
-
-### Instructions
-
-1. Install [detectron2](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md).
-
-1. Convert a pre-trained model to detectron2's format:
- ```
- python3 convert_model_to_detectron2.py --pretrained_feature_extractor PATH_TO_CKPT --output_detectron_model ./detectron_model.pkl
- ```
-
-1. Put the dataset under the "./datasets" directory,
-   following the [directory structure](https://github.com/facebookresearch/detectron2/tree/master/datasets)
-   required by detectron2.
-
-1. Run training:
- ```
- python train_net.py --config-file configs/pascal_voc_R_50_C4_24k_moco.yaml \
- --num-gpus 8 MODEL.WEIGHTS ./detectron_model.pkl
- ```
diff --git a/solo-learn/downstream/object_detection/configs/Base-RCNN-C4-BN.yaml b/solo-learn/downstream/object_detection/configs/Base-RCNN-C4-BN.yaml
deleted file mode 100644
index 5104c6a..0000000
--- a/solo-learn/downstream/object_detection/configs/Base-RCNN-C4-BN.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-MODEL:
- META_ARCHITECTURE: "GeneralizedRCNN"
- RPN:
- PRE_NMS_TOPK_TEST: 6000
- POST_NMS_TOPK_TEST: 1000
- ROI_HEADS:
- NAME: "Res5ROIHeadsExtraNorm"
- BACKBONE:
- FREEZE_AT: 0
- RESNETS:
- NORM: "SyncBN"
-TEST:
- PRECISE_BN:
- ENABLED: True
-SOLVER:
- IMS_PER_BATCH: 16
- BASE_LR: 0.02
diff --git a/solo-learn/downstream/object_detection/configs/coco_R_50_C4_2x.yaml b/solo-learn/downstream/object_detection/configs/coco_R_50_C4_2x.yaml
deleted file mode 100644
index 5b7e424..0000000
--- a/solo-learn/downstream/object_detection/configs/coco_R_50_C4_2x.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-_BASE_: "Base-RCNN-C4-BN.yaml"
-MODEL:
- MASK_ON: True
- WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
-INPUT:
- MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
- MIN_SIZE_TEST: 800
-DATASETS:
- TRAIN: ("coco_2017_train",)
- TEST: ("coco_2017_val",)
-SOLVER:
- STEPS: (120000, 160000)
- MAX_ITER: 180000
diff --git a/solo-learn/downstream/object_detection/configs/coco_R_50_C4_2x_moco.yaml b/solo-learn/downstream/object_detection/configs/coco_R_50_C4_2x_moco.yaml
deleted file mode 100644
index 73ef270..0000000
--- a/solo-learn/downstream/object_detection/configs/coco_R_50_C4_2x_moco.yaml
+++ /dev/null
@@ -1,9 +0,0 @@
-_BASE_: "coco_R_50_C4_2x.yaml"
-MODEL:
- PIXEL_MEAN: [123.675, 116.280, 103.530]
- PIXEL_STD: [58.395, 57.120, 57.375]
- WEIGHTS: "See Instructions"
- RESNETS:
- STRIDE_IN_1X1: False
-INPUT:
- FORMAT: "RGB"
diff --git a/solo-learn/downstream/object_detection/configs/pascal_voc_R_50_C4_24k.yaml b/solo-learn/downstream/object_detection/configs/pascal_voc_R_50_C4_24k.yaml
deleted file mode 100644
index a05eb5e..0000000
--- a/solo-learn/downstream/object_detection/configs/pascal_voc_R_50_C4_24k.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-_BASE_: "Base-RCNN-C4-BN.yaml"
-MODEL:
- MASK_ON: False
- WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
- ROI_HEADS:
- NUM_CLASSES: 20
-INPUT:
- MIN_SIZE_TRAIN: (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
- MIN_SIZE_TEST: 800
-DATASETS:
- TRAIN: ('voc_2007_trainval', 'voc_2012_trainval')
- TEST: ('voc_2007_test',)
-SOLVER:
- STEPS: (18000, 22000)
- MAX_ITER: 24000
- WARMUP_ITERS: 100
diff --git a/solo-learn/downstream/object_detection/configs/pascal_voc_R_50_C4_24k_moco.yaml b/solo-learn/downstream/object_detection/configs/pascal_voc_R_50_C4_24k_moco.yaml
deleted file mode 100644
index eebe690..0000000
--- a/solo-learn/downstream/object_detection/configs/pascal_voc_R_50_C4_24k_moco.yaml
+++ /dev/null
@@ -1,9 +0,0 @@
-_BASE_: "pascal_voc_R_50_C4_24k.yaml"
-MODEL:
- PIXEL_MEAN: [123.675, 116.280, 103.530]
- PIXEL_STD: [58.395, 57.120, 57.375]
- WEIGHTS: "See Instructions"
- RESNETS:
- STRIDE_IN_1X1: False
-INPUT:
- FORMAT: "RGB"
diff --git a/solo-learn/downstream/object_detection/convert_model_to_detectron2.py b/solo-learn/downstream/object_detection/convert_model_to_detectron2.py
deleted file mode 100644
index aa977ad..0000000
--- a/solo-learn/downstream/object_detection/convert_model_to_detectron2.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# Copyright 2021 solo-learn development team.
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy of
-# this software and associated documentation files (the "Software"), to deal in
-# the Software without restriction, including without limitation the rights to use,
-# copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
-# Software, and to permit persons to whom the Software is furnished to do so,
-# subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all copies
-# or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
-# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
-# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
-# FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
-# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
-# DEALINGS IN THE SOFTWARE.
-
-from argparse import ArgumentParser
-import pickle as pkl
-import torch
-
-if __name__ == "__main__":
- parser = ArgumentParser()
- parser.add_argument("--pretrained_feature_extractor", type=str, required=True)
- parser.add_argument("--output_detectron_model", type=str, required=True)
-
- args = parser.parse_args()
-
- checkpoint = torch.load(args.pretrained_feature_extractor, map_location="cpu")
- checkpoint = checkpoint["state_dict"]
-
- newmodel = {}
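-    # Map torchvision-style ResNet keys to detectron2's naming scheme:
-    # layer{t} -> res{t+1}, bn{t} -> conv{t}.norm, downsample -> shortcut.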
- for k, v in checkpoint.items():
- if not k.startswith("backbone"):
- continue
-
- old_k = k
- k = k.replace("backbone.", "")
- if "layer" not in k:
- k = "stem." + k
- for t in [1, 2, 3, 4]:
- k = k.replace(f"layer{t}", f"res{t + 1}")
- for t in [1, 2, 3]:
- k = k.replace(f"bn{t}", f"conv{t}.norm")
- k = k.replace("downsample.0", "shortcut")
- k = k.replace("downsample.1", "shortcut.norm")
- print(old_k, "->", k)
- newmodel[k] = v.numpy()
-
- res = {"model": newmodel, "__author__": "solo-learn", "matching_heuristics": True}
-
- with open(args.output_detectron_model, "wb") as f:
- pkl.dump(res, f)
diff --git a/solo-learn/downstream/object_detection/run.sh b/solo-learn/downstream/object_detection/run.sh
deleted file mode 100644
index a3bc616..0000000
--- a/solo-learn/downstream/object_detection/run.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-export DETECTRON2_DATASETS=/data/datasets
-
-# good results for BYOL
-python3 train_object_detection.py --config-file configs/pascal_voc_R_50_C4_24k_moco.yaml \
- --num-gpus 2 MODEL.WEIGHTS ./detectron_model.pkl SOLVER.IMS_PER_BATCH 16 SOLVER.BASE_LR 0.1
diff --git a/solo-learn/downstream/object_detection/train_object_detection.py b/solo-learn/downstream/object_detection/train_object_detection.py
deleted file mode 100644
index 856955f..0000000
--- a/solo-learn/downstream/object_detection/train_object_detection.py
+++ /dev/null
@@ -1,81 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Copied from https://github.com/facebookresearch/moco/blob/main/detection/train_net.py
-
-import os
-
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.engine import (
- DefaultTrainer,
- default_argument_parser,
- default_setup,
- launch,
-)
-from detectron2.evaluation import COCOEvaluator, PascalVOCDetectionEvaluator
-from detectron2.layers import get_norm
-from detectron2.modeling.roi_heads import ROI_HEADS_REGISTRY, Res5ROIHeads
-
-
-@ROI_HEADS_REGISTRY.register()
-class Res5ROIHeadsExtraNorm(Res5ROIHeads):
- """
- As described in the MOCO paper, there is an extra BN layer
- following the res5 stage.
- """
-
- def _build_res5_block(self, cfg):
- seq, out_channels = super()._build_res5_block(cfg)
- norm = cfg.MODEL.RESNETS.NORM
- norm = get_norm(norm, out_channels)
- seq.add_module("norm", norm)
- return seq, out_channels
-
-
-class Trainer(DefaultTrainer):
- @classmethod
- def build_evaluator(cls, cfg, dataset_name, output_folder=None):
- if output_folder is None:
- output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
- if "coco" in dataset_name:
- return COCOEvaluator(dataset_name, cfg, True, output_folder)
- else:
- assert "voc" in dataset_name
- return PascalVOCDetectionEvaluator(dataset_name)
-
-
-def setup(args):
- cfg = get_cfg()
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- default_setup(cfg, args)
- return cfg
-
-
-def main(args):
- cfg = setup(args)
-
- if args.eval_only:
- model = Trainer.build_model(cfg)
- DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
- cfg.MODEL.WEIGHTS, resume=args.resume
- )
- res = Trainer.test(cfg, model)
- return res
-
- trainer = Trainer(cfg)
- trainer.resume_or_load(resume=args.resume)
- return trainer.train()
-
-
-if __name__ == "__main__":
- args = default_argument_parser().parse_args()
- print("Command Line Args:", args)
- launch(
- main,
- args.num_gpus,
- num_machines=args.num_machines,
- machine_rank=args.machine_rank,
- dist_url=args.dist_url,
- args=(args,),
- )
diff --git a/solo-learn/main_knn.py b/solo-learn/main_knn.py
deleted file mode 100644
index a8a39a0..0000000
--- a/solo-learn/main_knn.py
+++ /dev/null
@@ -1,192 +0,0 @@
-# Copyright 2023 solo-learn development team.
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy of
-# this software and associated documentation files (the "Software"), to deal in
-# the Software without restriction, including without limitation the rights to use,
-# copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
-# Software, and to permit persons to whom the Software is furnished to do so,
-# subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all copies
-# or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
-# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
-# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
-# FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
-# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
-# DEALINGS IN THE SOFTWARE.
-
-import json
-import os
-from pathlib import Path
-from typing import Tuple
-
-import torch
-import torch.nn as nn
-from omegaconf import OmegaConf
-from torch.utils.data import DataLoader
-from tqdm import tqdm
-
-from solo.args.knn import parse_args_knn
-from solo.data.classification_dataloader import (
- prepare_dataloaders,
- prepare_datasets,
- prepare_transforms,
-)
-from solo.methods import METHODS
-from solo.utils.knn import WeightedKNNClassifier
-
-
-@torch.no_grad()
-def extract_features(
-    loader: DataLoader, model: nn.Module
-) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
- """Extract features from a data loader using a model.
-
- Args:
- loader (DataLoader): dataloader for a dataset.
- model (nn.Module): torch module used to extract features.
-
- Returns:
-        Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: tuple containing the
-            backbone features, projector features and labels.
- """
-
- model.eval()
- backbone_features, proj_features, labels = [], [], []
- for im, lab in tqdm(loader):
- im = im.cuda(non_blocking=True)
- lab = lab.cuda(non_blocking=True)
- outs = model(im)
- backbone_features.append(outs["feats"].detach())
- proj_features.append(outs["z"])
- labels.append(lab)
- model.train()
- backbone_features = torch.cat(backbone_features)
- proj_features = torch.cat(proj_features)
- labels = torch.cat(labels)
- return backbone_features, proj_features, labels
-
-
-@torch.no_grad()
-def run_knn(
- train_features: torch.Tensor,
- train_targets: torch.Tensor,
- test_features: torch.Tensor,
- test_targets: torch.Tensor,
- k: int,
- T: float,
- distance_fx: str,
-) -> Tuple[float, float]:
- """Runs offline knn on a train and a test dataset.
-
- Args:
-        train_features (torch.Tensor): train features.
-        train_targets (torch.Tensor): train targets.
-        test_features (torch.Tensor): test features.
-        test_targets (torch.Tensor): test targets.
- k (int): number of neighbors.
- T (float): temperature for the exponential. Only used with cosine
- distance.
- distance_fx (str): distance function.
-
- Returns:
-        Tuple[float, float]: tuple containing the knn acc@1 and acc@5 for the model.
- """
-
- # build knn
- knn = WeightedKNNClassifier(
- k=k,
- T=T,
- distance_fx=distance_fx,
- )
-
- # add features
- knn(
- train_features=train_features,
- train_targets=train_targets,
- test_features=test_features,
- test_targets=test_targets,
- )
-
- # compute
- acc1, acc5 = knn.compute()
-
- # free up memory
- del knn
-
- return acc1, acc5
-
-
-def main():
- args = parse_args_knn()
-
- # build paths
- ckpt_dir = Path(args.pretrained_checkpoint_dir)
- args_path = ckpt_dir / "args.json"
- ckpt_path = [
- ckpt_dir / ckpt for ckpt in os.listdir(ckpt_dir) if ckpt.endswith(".ckpt")
- ][0]
-
- # load arguments
- with open(args_path) as f:
- method_args = json.load(f)
- cfg = OmegaConf.create(method_args)
-
- # build the model
- model = METHODS[method_args["method"]].load_from_checkpoint(
- ckpt_path, strict=False, cfg=cfg
- )
-
- # prepare data
- _, T = prepare_transforms(args.dataset)
- train_dataset, val_dataset = prepare_datasets(
- args.dataset,
- T_train=T,
- T_val=T,
- train_data_path=args.train_data_path,
- val_data_path=args.val_data_path,
- data_format=args.data_format,
- )
- train_loader, val_loader = prepare_dataloaders(
- train_dataset,
- val_dataset,
- batch_size=args.batch_size,
- num_workers=args.num_workers,
- )
-
- # extract train features
- train_features_bb, train_features_proj, train_targets = extract_features(
- train_loader, model
- )
- train_features = {"backbone": train_features_bb, "projector": train_features_proj}
-
- # extract test features
- test_features_bb, test_features_proj, test_targets = extract_features(
- val_loader, model
- )
- test_features = {"backbone": test_features_bb, "projector": test_features_proj}
-
- # run k-nn for all possible combinations of parameters
- for feat_type in args.feature_type:
- print(f"\n### {feat_type.upper()} ###")
- for k in args.k:
- for distance_fx in args.distance_function:
- temperatures = args.temperature if distance_fx == "cosine" else [None]
- for T in temperatures:
- print("---")
- print(
- f"Running k-NN with params: distance_fx={distance_fx}, k={k}, T={T}..."
- )
- acc1, acc5 = run_knn(
- train_features=train_features[feat_type],
- train_targets=train_targets,
- test_features=test_features[feat_type],
- test_targets=test_targets,
- k=k,
- T=T,
- distance_fx=distance_fx,
- )
- print(f"Result: acc@1={acc1}, acc@5={acc5}")
-
-
-if __name__ == "__main__":
- main()
diff --git a/solo-learn/main_linear.py b/solo-learn/main_linear.py
index 9de6d91..97d5f94 100644
--- a/solo-learn/main_linear.py
+++ b/solo-learn/main_linear.py
@@ -64,11 +64,14 @@ def main(cfg: DictConfig):
# remove fc layer
backbone.fc = nn.Identity()
cifar = cfg.data.dataset in ["cifar10", "cifar100"]
- if cifar:
- backbone.conv1 = nn.Conv2d(
- 3, 64, kernel_size=3, stride=1, padding=2, bias=False
- )
- backbone.maxpool = nn.Identity()
+
+    # These lines were present in the original code, but they caused an error.
+
+ # if cifar:
+ # backbone.conv1 = nn.Conv2d(
+ # 3, 64, kernel_size=3, stride=1, padding=2, bias=False
+ # )
+ # backbone.maxpool = nn.Identity()
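+    # (They replaced the ImageNet stem with a CIFAR-style 3x3 stride-1
+    # convolution and removed the max-pool, as is common for 32x32 inputs.)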
ckpt_path = cfg.pretrained_feature_extractor
assert (
diff --git a/solo-learn/main_umap.py b/solo-learn/main_umap.py
deleted file mode 100644
index 0477178..0000000
--- a/solo-learn/main_umap.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright 2023 solo-learn development team.
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy of
-# this software and associated documentation files (the "Software"), to deal in
-# the Software without restriction, including without limitation the rights to use,
-# copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
-# Software, and to permit persons to whom the Software is furnished to do so,
-# subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all copies
-# or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
-# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
-# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
-# FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
-# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
-# DEALINGS IN THE SOFTWARE.
-
-import json
-import os
-from pathlib import Path
-
-from omegaconf import OmegaConf
-
-from solo.args.umap import parse_args_umap
-from solo.data.classification_dataloader import prepare_data
-from solo.methods import METHODS
-from solo.utils.auto_umap import OfflineUMAP
-
-
-def main():
- args = parse_args_umap()
-
- # build paths
- ckpt_dir = Path(args.pretrained_checkpoint_dir)
- args_path = ckpt_dir / "args.json"
- ckpt_path = [
- ckpt_dir / ckpt for ckpt in os.listdir(ckpt_dir) if ckpt.endswith(".ckpt")
- ][0]
-
- # load arguments
- with open(args_path) as f:
- method_args = json.load(f)
- cfg = OmegaConf.create(method_args)
-
- # build the model
- model = (
- METHODS[method_args["method"]]
- .load_from_checkpoint(ckpt_path, strict=False, cfg=cfg)
- .backbone
- )
- # prepare data
- train_loader, val_loader = prepare_data(
- args.dataset,
- train_data_path=args.train_data_path,
- val_data_path=args.val_data_path,
- data_format=args.data_format,
- batch_size=args.batch_size,
- num_workers=args.num_workers,
- auto_augment=False,
- )
-
- umap = OfflineUMAP()
-
- # move model to the gpu
- device = "cuda:0"
- model = model.to(device)
-
- umap.plot(device, model, train_loader, "im100_train_umap.pdf")
- umap.plot(device, model, val_loader, "im100_val_umap.pdf")
-
-
-if __name__ == "__main__":
- main()
diff --git a/solo-learn/scripts/finetune/imagenet-100/mae.yaml b/solo-learn/scripts/finetune/imagenet-100/mae.yaml
deleted file mode 100644
index a51b725..0000000
--- a/solo-learn/scripts/finetune/imagenet-100/mae.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mae-imagenet100-finetune"
-pretrained_feature_extractor: None
-backbone:
- name: "vit_base"
- kwargs:
- drop_path_rate: 0.1
-pretrain_method: "mae"
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "adamw"
- batch_size: 64
- lr: 5e-4
- weight_decay: 0.05
- layer_decay: 0.75
-scheduler:
- name: "warmup_cosine"
- warmup_start_lr: 0.0
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-label_smoothing: 0.1
-mixup: 0.8
-cutmix: 1.0
-finetune: True
-
-# overwrite PL stuff
-max_epochs: 100
-devices: [0, 1, 2, 3, 4, 5, 6, 7]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16
diff --git a/solo-learn/scripts/finetune/imagenet-100/wandb/mhug.yaml b/solo-learn/scripts/finetune/imagenet-100/wandb/mhug.yaml
deleted file mode 100644
index c842e44..0000000
--- a/solo-learn/scripts/finetune/imagenet-100/wandb/mhug.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: unitn-mhug
-project: "gen-ssl"
diff --git a/solo-learn/scripts/finetune/imagenet-100/wandb/private.yaml b/solo-learn/scripts/finetune/imagenet-100/wandb/private.yaml
deleted file mode 100644
index ad4e200..0000000
--- a/solo-learn/scripts/finetune/imagenet-100/wandb/private.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: None
-project: "gen-ssl"
diff --git a/solo-learn/scripts/finetune/imagenet/mae.yaml b/solo-learn/scripts/finetune/imagenet/mae.yaml
deleted file mode 100644
index f3c0453..0000000
--- a/solo-learn/scripts/finetune/imagenet/mae.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mae-imagenet-finetune"
-pretrained_feature_extractor: None
-backbone:
- name: "vit_base"
- kwargs:
- drop_path_rate: 0.1
-pretrain_method: "mae"
-data:
- dataset: "imagenet"
- train_path: "./datasets/imagenet/train"
- val_path: "./datasets/imagenet/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "adamw"
- batch_size: 64
- lr: 5e-4
- weight_decay: 0.05
- layer_decay: 0.75
-scheduler:
- name: "warmup_cosine"
- warmup_start_lr: 0.0
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-label_smoothing: 0.1
-mixup: 0.8
-cutmix: 1.0
-finetune: True
-
-# overwrite PL stuff
-max_epochs: 100
-devices: [0, 1, 2, 3, 4, 5, 6, 7]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16
diff --git a/solo-learn/scripts/finetune/imagenet/wandb/mhug.yaml b/solo-learn/scripts/finetune/imagenet/wandb/mhug.yaml
deleted file mode 100644
index c842e44..0000000
--- a/solo-learn/scripts/finetune/imagenet/wandb/mhug.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: unitn-mhug
-project: "gen-ssl"
diff --git a/solo-learn/scripts/finetune/imagenet/wandb/private.yaml b/solo-learn/scripts/finetune/imagenet/wandb/private.yaml
deleted file mode 100644
index ad4e200..0000000
--- a/solo-learn/scripts/finetune/imagenet/wandb/private.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: None
-project: "gen-ssl"
diff --git a/solo-learn/scripts/knn/imagenet-100/knn.sh b/solo-learn/scripts/knn/imagenet-100/knn.sh
deleted file mode 100644
index b742093..0000000
--- a/solo-learn/scripts/knn/imagenet-100/knn.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-python3 main_knn.py \
- --dataset imagenet100 \
- --train_data_path ./datasets/imagenet-100/train \
- --val_data_path ./datasets/imagenet-100/val \
- --batch_size 16 \
- --num_workers 10 \
- --pretrained_checkpoint_dir $1 \
- --k 1 2 5 10 20 50 100 200 \
- --temperature 0.01 0.02 0.05 0.07 0.1 0.2 0.5 1 \
- --feature_type backbone projector \
- --distance_function euclidean cosine
diff --git a/solo-learn/scripts/linear/imagenet-100/barlow.yaml b/solo-learn/scripts/linear/cifar10/barlow.yaml
similarity index 58%
rename from solo-learn/scripts/linear/imagenet-100/barlow.yaml
rename to solo-learn/scripts/linear/cifar10/barlow.yaml
index 534859b..6a199d9 100644
--- a/solo-learn/scripts/linear/imagenet-100/barlow.yaml
+++ b/solo-learn/scripts/linear/cifar10/barlow.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "barlow_twins-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "barlow-cifar10-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
+ name: "resnet50"
pretrain_method: "barlow_twins"
data:
- dataset: imagenet100
- train_path: "/home/CORP/vg.turrisi/Documents/datasets/imagenet-100/train"
- val_path: "/home/CORP/vg.turrisi/Documents/datasets/imagenet-100/val"
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "image_folder"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
+ name: "lars"
+ batch_size: 512
lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/cifar10/barlow_diff.yaml b/solo-learn/scripts/linear/cifar10/barlow_diff.yaml
new file mode 100644
index 0000000..d66c394
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/barlow_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-cifar10-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/barlow_icgan.yaml b/solo-learn/scripts/linear/cifar10/barlow_icgan.yaml
new file mode 100644
index 0000000..849f7ab
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/barlow_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-cifar10-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/ressl.yaml b/solo-learn/scripts/linear/cifar10/byol.yaml
similarity index 54%
rename from solo-learn/scripts/linear/imagenet-100/ressl.yaml
rename to solo-learn/scripts/linear/cifar10/byol.yaml
index e8e87d8..09ba193 100644
--- a/solo-learn/scripts/linear/imagenet-100/ressl.yaml
+++ b/solo-learn/scripts/linear/cifar10/byol.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "ressl-imagenet100-linear"
-pretrained_feature_extractor: None
-backbone:
- name: "resnet18"
-pretrain_method: "ressl"
+name: "byol-cifar10-linear"
+pretrained_feature_extractor: MODEL_PATH
+ name: "resnet50"
+pretrain_method: "byol"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 3.0
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/cifar10/byol_diff.yaml b/solo-learn/scripts/linear/cifar10/byol_diff.yaml
new file mode 100644
index 0000000..93b7be5
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/byol_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-cifar10-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/byol_icgan.yaml b/solo-learn/scripts/linear/cifar10/byol_icgan.yaml
new file mode 100644
index 0000000..659a100
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/byol_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-cifar10-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/moco.yaml b/solo-learn/scripts/linear/cifar10/moco.yaml
new file mode 100644
index 0000000..a837698
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/moco.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-cifar10-linear"
+pretrained_feature_extractor: MODEL_PATH
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/moco_diff.yaml b/solo-learn/scripts/linear/cifar10/moco_diff.yaml
new file mode 100644
index 0000000..cdb32f9
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/moco_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-cifar10-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/moco_icgan.yaml b/solo-learn/scripts/linear/cifar10/moco_icgan.yaml
new file mode 100644
index 0000000..55363be
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/moco_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-cifar10-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/simclr.yaml b/solo-learn/scripts/linear/cifar10/simclr.yaml
new file mode 100644
index 0000000..03aa9d8
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/simclr.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-cifar10-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/simclr_diff.yaml b/solo-learn/scripts/linear/cifar10/simclr_diff.yaml
new file mode 100644
index 0000000..12b8766
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/simclr_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-cifar10-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/simclr_icgan.yaml b/solo-learn/scripts/linear/cifar10/simclr_icgan.yaml
new file mode 100644
index 0000000..99ea91f
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/simclr_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-cifar10-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/simsiam.yaml b/solo-learn/scripts/linear/cifar10/simsiam.yaml
new file mode 100644
index 0000000..0f01f08
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/simsiam.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-cifar10-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/simsiam_diff.yaml b/solo-learn/scripts/linear/cifar10/simsiam_diff.yaml
new file mode 100644
index 0000000..0b98d90
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/simsiam_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-cifar10-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar10/simsiam_icgan.yaml b/solo-learn/scripts/linear/cifar10/simsiam_icgan.yaml
new file mode 100644
index 0000000..897c49d
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar10/simsiam_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-cifar10-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: cifar10
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/barlow.yaml b/solo-learn/scripts/linear/cifar100/barlow.yaml
new file mode 100644
index 0000000..34b5e62
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/barlow.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-cifar100-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/barlow_diff.yaml b/solo-learn/scripts/linear/cifar100/barlow_diff.yaml
new file mode 100644
index 0000000..3d1f4a3
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/barlow_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-cifar100-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/barlow_icgan.yaml b/solo-learn/scripts/linear/cifar100/barlow_icgan.yaml
new file mode 100644
index 0000000..41bb6f0
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/barlow_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-cifar100-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/byol.yaml b/solo-learn/scripts/linear/cifar100/byol.yaml
new file mode 100644
index 0000000..fe94ae8
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/byol.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-cifar100-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/byol_diff.yaml b/solo-learn/scripts/linear/cifar100/byol_diff.yaml
new file mode 100644
index 0000000..1a86a64
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/byol_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-cifar100-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/byol_icgan.yaml b/solo-learn/scripts/linear/cifar100/byol_icgan.yaml
new file mode 100644
index 0000000..7ee0604
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/byol_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-cifar100-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/moco.yaml b/solo-learn/scripts/linear/cifar100/moco.yaml
new file mode 100644
index 0000000..29dd632
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/moco.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-cifar100-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/moco_diff.yaml b/solo-learn/scripts/linear/cifar100/moco_diff.yaml
new file mode 100644
index 0000000..867beba
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/moco_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-cifar100-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/moco_icgan.yaml b/solo-learn/scripts/linear/cifar100/moco_icgan.yaml
new file mode 100644
index 0000000..e1f550e
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/moco_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-cifar100-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/simclr.yaml b/solo-learn/scripts/linear/cifar100/simclr.yaml
new file mode 100644
index 0000000..9bad0b4
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/simclr.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-cifar100-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/simclr_diff.yaml b/solo-learn/scripts/linear/cifar100/simclr_diff.yaml
new file mode 100644
index 0000000..1620412
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/simclr_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-cifar100-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/simclr_icgan.yaml b/solo-learn/scripts/linear/cifar100/simclr_icgan.yaml
new file mode 100644
index 0000000..74ca30a
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/simclr_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-cifar100-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/simsiam.yaml b/solo-learn/scripts/linear/cifar100/simsiam.yaml
new file mode 100644
index 0000000..ce86e62
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/simsiam.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-cifar100-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/simsiam_diff.yaml b/solo-learn/scripts/linear/cifar100/simsiam_diff.yaml
new file mode 100644
index 0000000..3de77a2
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/simsiam_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-cifar100-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/cifar100/simsiam_icgan.yaml b/solo-learn/scripts/linear/cifar100/simsiam_icgan.yaml
new file mode 100644
index 0000000..0995d3a
--- /dev/null
+++ b/solo-learn/scripts/linear/cifar100/simsiam_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-cifar100-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: cifar100
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/barlow.yaml b/solo-learn/scripts/linear/food/barlow.yaml
new file mode 100644
index 0000000..d2e575f
--- /dev/null
+++ b/solo-learn/scripts/linear/food/barlow.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-food101-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/barlow_diff.yaml b/solo-learn/scripts/linear/food/barlow_diff.yaml
new file mode 100644
index 0000000..2200847
--- /dev/null
+++ b/solo-learn/scripts/linear/food/barlow_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-food101-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/barlow_icgan.yaml b/solo-learn/scripts/linear/food/barlow_icgan.yaml
new file mode 100644
index 0000000..c6d02e6
--- /dev/null
+++ b/solo-learn/scripts/linear/food/barlow_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-food101-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/byol.yaml b/solo-learn/scripts/linear/food/byol.yaml
new file mode 100644
index 0000000..1543e5c
--- /dev/null
+++ b/solo-learn/scripts/linear/food/byol.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-food101-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/byol_diff.yaml b/solo-learn/scripts/linear/food/byol_diff.yaml
new file mode 100644
index 0000000..614809c
--- /dev/null
+++ b/solo-learn/scripts/linear/food/byol_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-food101-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/byol_icgan.yaml b/solo-learn/scripts/linear/food/byol_icgan.yaml
new file mode 100644
index 0000000..ec08894
--- /dev/null
+++ b/solo-learn/scripts/linear/food/byol_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-food101-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/moco.yaml b/solo-learn/scripts/linear/food/moco.yaml
new file mode 100644
index 0000000..3fbc704
--- /dev/null
+++ b/solo-learn/scripts/linear/food/moco.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-food101-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/moco_diff.yaml b/solo-learn/scripts/linear/food/moco_diff.yaml
new file mode 100644
index 0000000..dadd228
--- /dev/null
+++ b/solo-learn/scripts/linear/food/moco_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-food101-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/moco_icgan.yaml b/solo-learn/scripts/linear/food/moco_icgan.yaml
new file mode 100644
index 0000000..77f8dee
--- /dev/null
+++ b/solo-learn/scripts/linear/food/moco_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-food101-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/simclr.yaml b/solo-learn/scripts/linear/food/simclr.yaml
new file mode 100644
index 0000000..c66bab0
--- /dev/null
+++ b/solo-learn/scripts/linear/food/simclr.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-food101-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/simclr_diff.yaml b/solo-learn/scripts/linear/food/simclr_diff.yaml
new file mode 100644
index 0000000..d6528b8
--- /dev/null
+++ b/solo-learn/scripts/linear/food/simclr_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-food101-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/simclr_icgan.yaml b/solo-learn/scripts/linear/food/simclr_icgan.yaml
new file mode 100644
index 0000000..3550cd7
--- /dev/null
+++ b/solo-learn/scripts/linear/food/simclr_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-food101-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/simsiam.yaml b/solo-learn/scripts/linear/food/simsiam.yaml
new file mode 100644
index 0000000..55769a7
--- /dev/null
+++ b/solo-learn/scripts/linear/food/simsiam.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-food101-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/simsiam_diff.yaml b/solo-learn/scripts/linear/food/simsiam_diff.yaml
new file mode 100644
index 0000000..a224375
--- /dev/null
+++ b/solo-learn/scripts/linear/food/simsiam_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-food101-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/food/simsiam_icgan.yaml b/solo-learn/scripts/linear/food/simsiam_icgan.yaml
new file mode 100644
index 0000000..d4187ed
--- /dev/null
+++ b/solo-learn/scripts/linear/food/simsiam_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-food101-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: food101
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "image_folder"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
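
Each dataset directory now carries the same 15-config grid: five SSL methods (barlow, byol, moco, simclr, simsiam), each in a plain variant and in `_diff` (Stable Diffusion augmentation) and `_icgan` (ICGAN augmentation) variants. Since the configs differ only in name and paths, the whole grid for one dataset can be swept in a loop; a sketch under the same `main_linear.py` assumption as above:

```bash
#!/usr/bin/env bash
# Sweep linear probes over the method x generative-augmentation grid
# for one dataset directory.
DATASET=food   # one of: cifar10, cifar100, food

for METHOD in barlow byol moco simclr simsiam; do
  for VARIANT in "" "_diff" "_icgan"; do
    python3 main_linear.py \
      --config-path "scripts/linear/${DATASET}" \
      --config-name "${METHOD}${VARIANT}.yaml"
  done
done
```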
diff --git a/solo-learn/scripts/linear/imagenet-100/deepclusterv2.yaml b/solo-learn/scripts/linear/imagenet-100/deepclusterv2.yaml
deleted file mode 100644
index 4d40619..0000000
--- a/solo-learn/scripts/linear/imagenet-100/deepclusterv2.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "deepclusterv2-imagenet100-linear"
-pretrained_feature_extractor: None
-backbone:
- name: "resnet18"
-pretrain_method: "deepclusterv2"
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.15
- weight_decay: 0
-scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 100
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/mocov3_vit.yaml b/solo-learn/scripts/linear/imagenet-100/mocov3_vit.yaml
deleted file mode 100644
index 92a298e..0000000
--- a/solo-learn/scripts/linear/imagenet-100/mocov3_vit.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mocov3-vit-imagenet100-linear"
-pretrained_feature_extractor: None
-backbone:
- name: "vit_small"
-pretrain_method: "mocov3"
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.3
- weight_decay: 0
-scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 100
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/vibcreg.yaml b/solo-learn/scripts/linear/imagenet-100/vibcreg.yaml
deleted file mode 100644
index d4ad39f..0000000
--- a/solo-learn/scripts/linear/imagenet-100/vibcreg.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "vibcreg-imagenet100-linear"
-pretrained_feature_extractor: None
-backbone:
- name: "resnet18"
-pretrain_method: "vibcreg"
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.3
- weight_decay: 0
-scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 100
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/vicreg.yaml b/solo-learn/scripts/linear/imagenet-100/vicreg.yaml
deleted file mode 100644
index 0d0150b..0000000
--- a/solo-learn/scripts/linear/imagenet-100/vicreg.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "vicreg-imagenet100-linear"
-pretrained_feature_extractor: None
-backbone:
- name: "resnet18"
-pretrain_method: "vicreg"
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.3
- weight_decay: 0
-scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 100
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/wandb/mhug.yaml b/solo-learn/scripts/linear/imagenet-100/wandb/mhug.yaml
deleted file mode 100644
index c842e44..0000000
--- a/solo-learn/scripts/linear/imagenet-100/wandb/mhug.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: unitn-mhug
-project: "gen-ssl"
diff --git a/solo-learn/scripts/linear/imagenet-100/wandb/private.yaml b/solo-learn/scripts/linear/imagenet-100/wandb/private.yaml
deleted file mode 100644
index ad4e200..0000000
--- a/solo-learn/scripts/linear/imagenet-100/wandb/private.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: None
-project: "gen-ssl"
diff --git a/solo-learn/scripts/linear/imagenet/barlow.yaml b/solo-learn/scripts/linear/imagenet/barlow.yaml
index 61d32ab..e7002d0 100644
--- a/solo-learn/scripts/linear/imagenet/barlow.yaml
+++ b/solo-learn/scripts/linear/imagenet/barlow.yaml
@@ -11,34 +11,35 @@ hydra:
dir: .
name: "barlow-imagenet-linear"
-pretrained_feature_extractor: None
+pretrained_feature_extractor: MODEL_PATH
backbone:
name: "resnet50"
-pretrain_method: "barlow"
+pretrain_method: "barlow_twins"
data:
dataset: imagenet
- train_path: "./datasets/imagenet/train"
- val_path: "./datasets/imagenet/val"
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.3
- weight_decay: 1e-5
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/imagenet-100/dino.yaml b/solo-learn/scripts/linear/imagenet/barlow_diff.yaml
similarity index 57%
rename from solo-learn/scripts/linear/imagenet-100/dino.yaml
rename to solo-learn/scripts/linear/imagenet/barlow_diff.yaml
index edacd28..e7002d0 100644
--- a/solo-learn/scripts/linear/imagenet-100/dino.yaml
+++ b/solo-learn/scripts/linear/imagenet/barlow_diff.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "dino-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "barlow-imagenet-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
-pretrain_method: "dino"
+ name: "resnet50"
+pretrain_method: "barlow_twins"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.3
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/imagenet-100/swav.yaml b/solo-learn/scripts/linear/imagenet/barlow_icgan.yaml
similarity index 57%
rename from solo-learn/scripts/linear/imagenet-100/swav.yaml
rename to solo-learn/scripts/linear/imagenet/barlow_icgan.yaml
index f0155b5..9eb768f 100644
--- a/solo-learn/scripts/linear/imagenet-100/swav.yaml
+++ b/solo-learn/scripts/linear/imagenet/barlow_icgan.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "swav-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "barlow-imagenet-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
-pretrain_method: "swav"
+ name: "resnet50"
+pretrain_method: "barlow_twins"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.15
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/imagenet/byol.yaml b/solo-learn/scripts/linear/imagenet/byol.yaml
index 12aef32..f8de73c 100644
--- a/solo-learn/scripts/linear/imagenet/byol.yaml
+++ b/solo-learn/scripts/linear/imagenet/byol.yaml
@@ -11,34 +11,35 @@ hydra:
dir: .
name: "byol-imagenet-linear"
-pretrained_feature_extractor: None
+pretrained_feature_extractor: MODEL_PATH
backbone:
name: "resnet50"
pretrain_method: "byol"
data:
dataset: imagenet
- train_path: "./datasets/imagenet/train"
- val_path: "./datasets/imagenet/val"
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
+ name: "lars"
+ batch_size: 512
lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/imagenet-100/nnclr.yaml b/solo-learn/scripts/linear/imagenet/byol_diff.yaml
similarity index 57%
rename from solo-learn/scripts/linear/imagenet-100/nnclr.yaml
rename to solo-learn/scripts/linear/imagenet/byol_diff.yaml
index ac197f9..d25e950 100644
--- a/solo-learn/scripts/linear/imagenet-100/nnclr.yaml
+++ b/solo-learn/scripts/linear/imagenet/byol_diff.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "nnclr-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "byol-imagenet-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
-pretrain_method: "nnclr"
+ name: "resnet50"
+pretrain_method: "byol"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.3
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/imagenet/byol_icgan.yaml b/solo-learn/scripts/linear/imagenet/byol_icgan.yaml
new file mode 100644
index 0000000..ef0b980
--- /dev/null
+++ b/solo-learn/scripts/linear/imagenet/byol_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-imagenet-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
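
Throughout these configs, the capitalized tokens MODEL_PATH, TRAIN_PATH, VAL_PATH, and SAVE_PATH are placeholders, not valid values; they must be replaced with concrete paths before the config is used. One minimal way to fill them offline (the helper and all paths below are hypothetical, purely for illustration):

import re
from pathlib import Path

# Hypothetical concrete values; substitute your own.
PLACEHOLDERS = {
    "MODEL_PATH": "/checkpoints/byol-imagenet-icgan-ep=99.ckpt",
    "TRAIN_PATH": "/data/imagenet/train",
    "VAL_PATH": "/data/imagenet/val",
    "SAVE_PATH": "/checkpoints/linear",
}

def fill_placeholders(text: str) -> str:
    # Replace each whole-word placeholder token with its concrete path.
    for token, value in PLACEHOLDERS.items():
        text = re.sub(rf"\b{token}\b", value, text)
    return text

src = Path("solo-learn/scripts/linear/imagenet/byol_icgan.yaml")
Path("byol_icgan.filled.yaml").write_text(fill_placeholders(src.read_text()))

Since the configs are Hydra-based, the same fields can equally be overridden on the command line at launch time instead of editing the files.
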
diff --git a/solo-learn/scripts/linear/imagenet/moco.yaml b/solo-learn/scripts/linear/imagenet/moco.yaml
new file mode 100644
index 0000000..5e81863
--- /dev/null
+++ b/solo-learn/scripts/linear/imagenet/moco.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-imagenet-linear"
+pretrained_feature_extractor: MODEL_PATH
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/mocov2plus.yaml b/solo-learn/scripts/linear/imagenet/moco_diff.yaml
similarity index 60%
rename from solo-learn/scripts/linear/imagenet-100/mocov2plus.yaml
rename to solo-learn/scripts/linear/imagenet/moco_diff.yaml
index 55d15a0..72aaa80 100644
--- a/solo-learn/scripts/linear/imagenet-100/mocov2plus.yaml
+++ b/solo-learn/scripts/linear/imagenet/moco_diff.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "mocov2plus-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "moco-imagenet-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
+ name: "resnet50"
pretrain_method: "mocov2plus"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 3.0
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/imagenet/moco_icgan.yaml b/solo-learn/scripts/linear/imagenet/moco_icgan.yaml
new file mode 100644
index 0000000..fb15d6b
--- /dev/null
+++ b/solo-learn/scripts/linear/imagenet/moco_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-imagenet-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/imagenet/mocov2plus.yaml b/solo-learn/scripts/linear/imagenet/mocov2plus.yaml
deleted file mode 100644
index 5840a77..0000000
--- a/solo-learn/scripts/linear/imagenet/mocov2plus.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mocov2plus-imagenet-linear"
-pretrained_feature_extractor: "trained_models/mocov2plus/gjf2upj4/mocov2plus-imagenet-gjf2upj4-ep=99.ckpt"
-backbone:
- name: "resnet50"
-pretrain_method: "mocov2plus"
-data:
- dataset: imagenet
- train_path: "./datasets/imagenet/train"
- val_path: "./datasets/imagenet/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 3.0
- weight_decay: 0
-scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
-checkpoint:
- enabled: True
- dir: "/projects/imagenet_synthetic/model_checkpoints/solo-learn/solo_trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 100
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/simclr.yaml b/solo-learn/scripts/linear/imagenet/simclr.yaml
similarity index 60%
rename from solo-learn/scripts/linear/imagenet-100/simclr.yaml
rename to solo-learn/scripts/linear/imagenet/simclr.yaml
index 04e312f..6e3bc09 100644
--- a/solo-learn/scripts/linear/imagenet-100/simclr.yaml
+++ b/solo-learn/scripts/linear/imagenet/simclr.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "simclr-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "simclr-imagenet-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
+ name: "resnet50"
pretrain_method: "simclr"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 1.0
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/imagenet/simclr_diff.yaml b/solo-learn/scripts/linear/imagenet/simclr_diff.yaml
new file mode 100644
index 0000000..449edd1
--- /dev/null
+++ b/solo-learn/scripts/linear/imagenet/simclr_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-imagenet-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/imagenet/simclr_icgan.yaml b/solo-learn/scripts/linear/imagenet/simclr_icgan.yaml
new file mode 100644
index 0000000..dc67597
--- /dev/null
+++ b/solo-learn/scripts/linear/imagenet/simclr_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-imagenet-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/imagenet/simsiam.yaml b/solo-learn/scripts/linear/imagenet/simsiam.yaml
index 44206bf..79eb529 100644
--- a/solo-learn/scripts/linear/imagenet/simsiam.yaml
+++ b/solo-learn/scripts/linear/imagenet/simsiam.yaml
@@ -10,31 +10,32 @@ hydra:
run:
dir: .
-name: "simsiam-linear"
-pretrained_feature_extractor: "/projects/imagenet_synthetic/model_checkpoints/solo-learn/trained_models/simsiam/5/simsiam-imagenet-5-ep=99.ckpt"
+name: "simsiam-imagenet-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
name: "resnet50"
pretrain_method: "simsiam"
data:
dataset: imagenet
- train_path: "/datasets/imagenet/train"
- val_path: "/datasets/imagenet/val"
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 30.0
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "/projects/imagenet_synthetic/model_checkpoints/solo-learn/solo_trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
- enabled: False
+ enabled: True
# overwrite PL stuff
max_epochs: 100
diff --git a/solo-learn/scripts/linear/imagenet/simsiam_diff.yaml b/solo-learn/scripts/linear/imagenet/simsiam_diff.yaml
new file mode 100644
index 0000000..a3b7d56
--- /dev/null
+++ b/solo-learn/scripts/linear/imagenet/simsiam_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-imagenet-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/imagenet/simsiam_icgan.yaml b/solo-learn/scripts/linear/imagenet/simsiam_icgan.yaml
new file mode 100644
index 0000000..e060243
--- /dev/null
+++ b/solo-learn/scripts/linear/imagenet/simsiam_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-imagenet-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
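
All of the rewritten linear-probe configs also replace the step schedule (decay at epochs 60 and 80) with warmup_cosine at warmup_epochs: 0, i.e. plain cosine annealing of the learning rate across the 100 epochs, stepped once per epoch (scheduler_interval: "epoch"). Under the standard cosine-annealing definition, the decay these settings imply looks like this sketch:

import math

def cosine_lr(epoch: int, base_lr: float = 0.1,
              max_epochs: int = 100, min_lr: float = 0.0) -> float:
    # Cosine annealing with no warmup (warmup_epochs: 0).
    t = epoch / max_epochs
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

# cosine_lr(0) == 0.1, cosine_lr(50) == 0.05, cosine_lr(100) == 0.0
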
diff --git a/solo-learn/scripts/linear/inaturalist/barlow.yaml b/solo-learn/scripts/linear/inaturalist/barlow.yaml
new file mode 100644
index 0000000..38fb46d
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/barlow.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-inaturalist-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/barlow_diff.yaml b/solo-learn/scripts/linear/inaturalist/barlow_diff.yaml
new file mode 100644
index 0000000..38fb46d
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/barlow_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-inaturalist-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/mocov3.yaml b/solo-learn/scripts/linear/inaturalist/barlow_icgan.yaml
similarity index 57%
rename from solo-learn/scripts/linear/imagenet-100/mocov3.yaml
rename to solo-learn/scripts/linear/inaturalist/barlow_icgan.yaml
index 30beaf1..8f0253d 100644
--- a/solo-learn/scripts/linear/imagenet-100/mocov3.yaml
+++ b/solo-learn/scripts/linear/inaturalist/barlow_icgan.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "mocov3-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "barlow-inaturalist-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
-pretrain_method: "mocov3"
+ name: "resnet50"
+pretrain_method: "barlow_twins"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.3
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/imagenet-100/byol.yaml b/solo-learn/scripts/linear/inaturalist/byol.yaml
similarity index 60%
rename from solo-learn/scripts/linear/imagenet-100/byol.yaml
rename to solo-learn/scripts/linear/inaturalist/byol.yaml
index e167722..dcca72a 100644
--- a/solo-learn/scripts/linear/imagenet-100/byol.yaml
+++ b/solo-learn/scripts/linear/inaturalist/byol.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "byol-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "byol-inaturalist-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
+ name: "resnet50"
pretrain_method: "byol"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.3
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/inaturalist/byol_diff.yaml b/solo-learn/scripts/linear/inaturalist/byol_diff.yaml
new file mode 100644
index 0000000..1fb3c01
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/byol_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-inaturalist-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/byol_icgan.yaml b/solo-learn/scripts/linear/inaturalist/byol_icgan.yaml
new file mode 100644
index 0000000..53f169b
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/byol_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-inaturalist-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/moco.yaml b/solo-learn/scripts/linear/inaturalist/moco.yaml
new file mode 100644
index 0000000..97a3239
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/moco.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-inaturalist-linear"
+pretrained_feature_extractor: MODEL_PATH
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/moco_diff.yaml b/solo-learn/scripts/linear/inaturalist/moco_diff.yaml
new file mode 100644
index 0000000..936afbf
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/moco_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-inaturalist-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/moco_icgan.yaml b/solo-learn/scripts/linear/inaturalist/moco_icgan.yaml
new file mode 100644
index 0000000..5e139c3
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/moco_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-inaturalist-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/simclr.yaml b/solo-learn/scripts/linear/inaturalist/simclr.yaml
new file mode 100644
index 0000000..8f6862d
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/simclr.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-inaturalist-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/simclr_diff.yaml b/solo-learn/scripts/linear/inaturalist/simclr_diff.yaml
new file mode 100644
index 0000000..7abb55b
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/simclr_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-inaturalist-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/simclr_icgan.yaml b/solo-learn/scripts/linear/inaturalist/simclr_icgan.yaml
new file mode 100644
index 0000000..9c72052
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/simclr_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-inaturalist-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/imagenet-100/simsiam.yaml b/solo-learn/scripts/linear/inaturalist/simsiam.yaml
similarity index 60%
rename from solo-learn/scripts/linear/imagenet-100/simsiam.yaml
rename to solo-learn/scripts/linear/inaturalist/simsiam.yaml
index b7d9dda..3ebd8db 100644
--- a/solo-learn/scripts/linear/imagenet-100/simsiam.yaml
+++ b/solo-learn/scripts/linear/inaturalist/simsiam.yaml
@@ -10,35 +10,36 @@ hydra:
run:
dir: .
-name: "simsiam-imagenet100-linear"
-pretrained_feature_extractor: None
+name: "simsiam-inaturalist-linear"
+pretrained_feature_extractor: MODEL_PATH
backbone:
- name: "resnet18"
+ name: "resnet50"
pretrain_method: "simsiam"
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
- batch_size: 256
- lr: 30.0
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
weight_decay: 0
scheduler:
- name: "step"
- lr_decay_steps: [60, 80]
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
max_epochs: 100
-devices: [0]
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/linear/inaturalist/simsiam_diff.yaml b/solo-learn/scripts/linear/inaturalist/simsiam_diff.yaml
new file mode 100644
index 0000000..121f20e
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/simsiam_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-inaturalist-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/inaturalist/simsiam_icgan.yaml b/solo-learn/scripts/linear/inaturalist/simsiam_icgan.yaml
new file mode 100644
index 0000000..7ede227
--- /dev/null
+++ b/solo-learn/scripts/linear/inaturalist/simsiam_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-inaturalist-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: inaturalist
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/barlow.yaml b/solo-learn/scripts/linear/places/barlow.yaml
new file mode 100644
index 0000000..ccba6d6
--- /dev/null
+++ b/solo-learn/scripts/linear/places/barlow.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-places-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/barlow_diff.yaml b/solo-learn/scripts/linear/places/barlow_diff.yaml
new file mode 100644
index 0000000..3b819f0
--- /dev/null
+++ b/solo-learn/scripts/linear/places/barlow_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-places-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/barlow_icgan.yaml b/solo-learn/scripts/linear/places/barlow_icgan.yaml
new file mode 100644
index 0000000..ae67213
--- /dev/null
+++ b/solo-learn/scripts/linear/places/barlow_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "barlow-places-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "barlow_twins"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/byol.yaml b/solo-learn/scripts/linear/places/byol.yaml
new file mode 100644
index 0000000..89faf1d
--- /dev/null
+++ b/solo-learn/scripts/linear/places/byol.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-places-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/byol_diff.yaml b/solo-learn/scripts/linear/places/byol_diff.yaml
new file mode 100644
index 0000000..0af3f6a
--- /dev/null
+++ b/solo-learn/scripts/linear/places/byol_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-places-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/byol_icgan.yaml b/solo-learn/scripts/linear/places/byol_icgan.yaml
new file mode 100644
index 0000000..3f3829f
--- /dev/null
+++ b/solo-learn/scripts/linear/places/byol_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "byol-places-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "byol"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/moco.yaml b/solo-learn/scripts/linear/places/moco.yaml
new file mode 100644
index 0000000..e02643e
--- /dev/null
+++ b/solo-learn/scripts/linear/places/moco.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-places-linear"
+pretrained_feature_extractor: MODEL_PATH
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/moco_diff.yaml b/solo-learn/scripts/linear/places/moco_diff.yaml
new file mode 100644
index 0000000..a2e1596
--- /dev/null
+++ b/solo-learn/scripts/linear/places/moco_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-places-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/moco_icgan.yaml b/solo-learn/scripts/linear/places/moco_icgan.yaml
new file mode 100644
index 0000000..675e59f
--- /dev/null
+++ b/solo-learn/scripts/linear/places/moco_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "moco-places-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+ name: "resnet50"
+pretrain_method: "mocov2plus"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/simclr.yaml b/solo-learn/scripts/linear/places/simclr.yaml
new file mode 100644
index 0000000..1c010d4
--- /dev/null
+++ b/solo-learn/scripts/linear/places/simclr.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-places-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/simclr_diff.yaml b/solo-learn/scripts/linear/places/simclr_diff.yaml
new file mode 100644
index 0000000..0352a35
--- /dev/null
+++ b/solo-learn/scripts/linear/places/simclr_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-places-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/simclr_icgan.yaml b/solo-learn/scripts/linear/places/simclr_icgan.yaml
new file mode 100644
index 0000000..939dd73
--- /dev/null
+++ b/solo-learn/scripts/linear/places/simclr_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-places-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simclr"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/simsiam.yaml b/solo-learn/scripts/linear/places/simsiam.yaml
new file mode 100644
index 0000000..1cf2301
--- /dev/null
+++ b/solo-learn/scripts/linear/places/simsiam.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-places-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/simsiam_diff.yaml b/solo-learn/scripts/linear/places/simsiam_diff.yaml
new file mode 100644
index 0000000..789d03a
--- /dev/null
+++ b/solo-learn/scripts/linear/places/simsiam_diff.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-places-diff-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
diff --git a/solo-learn/scripts/linear/places/simsiam_icgan.yaml b/solo-learn/scripts/linear/places/simsiam_icgan.yaml
new file mode 100644
index 0000000..ad79c48
--- /dev/null
+++ b/solo-learn/scripts/linear/places/simsiam_icgan.yaml
@@ -0,0 +1,46 @@
+defaults:
+ - _self_
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simsiam-places-icgan-linear"
+pretrained_feature_extractor: MODEL_PATH
+backbone:
+ name: "resnet50"
+pretrain_method: "simsiam"
+data:
+ dataset: places365
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 512
+ lr: 0.1
+ weight_decay: 0
+scheduler:
+ name: "warmup_cosine"
+ warmup_epochs: 0
+ scheduler_interval: "epoch"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 45
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16
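
With these additions, each dataset directory (imagenet, inaturalist, places) carries the same grid: five SSL methods, each in a baseline, a _diff (Stable Diffusion-augmented), and an _icgan variant. A sketch of sweeping the whole grid for one dataset, assuming solo-learn's main_linear.py Hydra entry point and supplying the placeholder fields as overrides (all paths below are hypothetical):

import itertools
import subprocess

methods = ["barlow", "byol", "moco", "simclr", "simsiam"]
variants = ["", "_diff", "_icgan"]  # baseline / Stable Diffusion / ICGAN

for method, suffix in itertools.product(methods, variants):
    subprocess.run(
        [
            "python", "main_linear.py",
            "--config-path", "scripts/linear/places",
            "--config-name", f"{method}{suffix}",
            # Hypothetical overrides for the placeholder fields:
            "pretrained_feature_extractor=/checkpoints/model.ckpt",
            "data.train_path=/data/places365/train",
            "data.val_path=/data/places365/val",
            "checkpoint.dir=/checkpoints/linear",
        ],
        check=True,
        cwd="solo-learn",
    )
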
diff --git a/solo-learn/scripts/pretrain/cifar-multicrop/augmentations/swav.yaml b/solo-learn/scripts/pretrain/cifar-multicrop/augmentations/swav.yaml
deleted file mode 100644
index 95baaf3..0000000
--- a/solo-learn/scripts/pretrain/cifar-multicrop/augmentations/swav.yaml
+++ /dev/null
@@ -1,68 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.6
- contrast: 0.6
- saturation: 0.6
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 1
-
-- rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.6
- contrast: 0.6
- saturation: 0.6
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.2
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 1
-
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 0.5
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.1
- solarization:
- prob: 0.1
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 18
- num_crops: 6
diff --git a/solo-learn/scripts/pretrain/cifar-multicrop/swav.yaml b/solo-learn/scripts/pretrain/cifar-multicrop/swav.yaml
deleted file mode 100644
index c36b766..0000000
--- a/solo-learn/scripts/pretrain/cifar-multicrop/swav.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-defaults:
- - _self_
- - augmentations: swav.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "swav-cifar10-multicrop" # change here for cifar100
-method: "swav"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- queue_size: 3840
- proj_output_dim: 128
- num_prototypes: 3000
- epoch_queue_starts: 50
- freeze_prototypes_epochs: 2
- temperature: 0.1
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "datasets/imagenet100/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.6
- classifier_lr: 0.1
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
- min_lr: 0.0006
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar-multicrop/wandb/mhug.yaml b/solo-learn/scripts/pretrain/cifar-multicrop/wandb/mhug.yaml
deleted file mode 100644
index c842e44..0000000
--- a/solo-learn/scripts/pretrain/cifar-multicrop/wandb/mhug.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: unitn-mhug
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/cifar-multicrop/wandb/private.yaml b/solo-learn/scripts/pretrain/cifar-multicrop/wandb/private.yaml
deleted file mode 100644
index ad4e200..0000000
--- a/solo-learn/scripts/pretrain/cifar-multicrop/wandb/private.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: None
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/cifar/all4one.yaml b/solo-learn/scripts/pretrain/cifar/all4one.yaml
deleted file mode 100644
index 7db7ea7..0000000
--- a/solo-learn/scripts/pretrain/cifar/all4one.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "All4One-cifar100" # change here for cifar10
-method: "all4one"
-backbone:
- name: "resnet18"
-method_kwargs:
- temperature: 0.2
- proj_hidden_dim: 2048
- pred_hidden_dim: 4096
- proj_output_dim: 256
- queue_size: 98304
-momentum:
- base_tau: 0.99
- final_tau: 1.0
-data:
- dataset: cifar100 # change here for cifar10
- train_path: "./datasets/"
- val_path: "./datasets/"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 1.0
- classifier_lr: 0.1
- weight_decay: 1e-5
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: False
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/augmentations/asymmetric.yaml b/solo-learn/scripts/pretrain/cifar/augmentations/asymmetric.yaml
deleted file mode 100644
index 8eb4ebc..0000000
--- a/solo-learn/scripts/pretrain/cifar/augmentations/asymmetric.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 1
-
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.2
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 1
diff --git a/solo-learn/scripts/pretrain/cifar/augmentations/reconstruction.yaml b/solo-learn/scripts/pretrain/cifar/augmentations/reconstruction.yaml
deleted file mode 100644
index 56e7549..0000000
--- a/solo-learn/scripts/pretrain/cifar/augmentations/reconstruction.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.0
- grayscale:
- prob: 0.0
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 1
diff --git a/solo-learn/scripts/pretrain/cifar/augmentations/ressl.yaml b/solo-learn/scripts/pretrain/cifar/augmentations/ressl.yaml
deleted file mode 100644
index 7f6dc2c..0000000
--- a/solo-learn/scripts/pretrain/cifar/augmentations/ressl.yaml
+++ /dev/null
@@ -1,41 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 1
-
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.0
- grayscale:
- prob: 0.0
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 1
diff --git a/solo-learn/scripts/pretrain/cifar/augmentations/symmetric.yaml b/solo-learn/scripts/pretrain/cifar/augmentations/symmetric.yaml
deleted file mode 100644
index 24078d9..0000000
--- a/solo-learn/scripts/pretrain/cifar/augmentations/symmetric.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 2
diff --git a/solo-learn/scripts/pretrain/cifar/augmentations/symmetric_weak.yaml b/solo-learn/scripts/pretrain/cifar/augmentations/symmetric_weak.yaml
deleted file mode 100644
index 8ce8159..0000000
--- a/solo-learn/scripts/pretrain/cifar/augmentations/symmetric_weak.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.4
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 2
diff --git a/solo-learn/scripts/pretrain/cifar/barlow.yaml b/solo-learn/scripts/pretrain/cifar/barlow.yaml
deleted file mode 100644
index 728f14b..0000000
--- a/solo-learn/scripts/pretrain/cifar/barlow.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "barlow_twins-cifar10" # change here for cifar100
-method: "barlow_twins"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 2048
- scale_loss: 0.1
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-4
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/deepclusterv2.yaml b/solo-learn/scripts/pretrain/cifar/deepclusterv2.yaml
deleted file mode 100644
index f884785..0000000
--- a/solo-learn/scripts/pretrain/cifar/deepclusterv2.yaml
+++ /dev/null
@@ -1,56 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "deepclusterv2-cifar10" # change here for cifar100
-method: "deepclusterv2"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 128
- num_prototypes: [3000, 3000, 3000]
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.6
- classifier_lr: 0.1
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
- min_lr: 0.0006
- warmup_start_lr: 0.0
- warmup_epochs: 11
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/dino.yaml b/solo-learn/scripts/pretrain/cifar/dino.yaml
deleted file mode 100644
index 008e3ab..0000000
--- a/solo-learn/scripts/pretrain/cifar/dino.yaml
+++ /dev/null
@@ -1,56 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "dino-cifar10" # change here for cifar100
-method: "dino"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 256
- num_prototypes: 4096
-momentum:
- base_tau: 0.9995
- final_tau: 1.0
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
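
The `momentum` block drives the EMA teacher: the teacher weights track the student with coefficient tau, annealed from `base_tau` to `final_tau` over training. A sketch of the update rule, assuming the cosine annealing that is common for BYOL/DINO-style methods:

```python
import math

def current_tau(step, max_steps, base_tau=0.9995, final_tau=1.0):
    # Cosine increase from base_tau (step 0) to final_tau (last step).
    return final_tau - (final_tau - base_tau) * (math.cos(math.pi * step / max_steps) + 1) / 2

def ema_update(teacher, student, tau):
    """teacher <- tau * teacher + (1 - tau) * student, element-wise."""
    return [tau * t + (1 - tau) * s for t, s in zip(teacher, student)]
```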
diff --git a/solo-learn/scripts/pretrain/cifar/mae.yaml b/solo-learn/scripts/pretrain/cifar/mae.yaml
deleted file mode 100644
index 0d8f8ba..0000000
--- a/solo-learn/scripts/pretrain/cifar/mae.yaml
+++ /dev/null
@@ -1,56 +0,0 @@
-defaults:
- - _self_
- - augmentations: reconstruction.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mae-cifar10" # change here for cifar100
-method: "mae"
-backbone:
- name: "vit_small"
- kwargs:
- patch_size: 4
- img_size: 32
-method_kwargs:
- decoder_embed_dim: 512
- decoder_depth: 8
- decoder_num_heads: 16
- mask_ratio: 0.75
- norm_pix_loss: True
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "adamw"
- batch_size: 256
- lr: 2.0e-4
- classifier_lr: 2.0e-4
- weight_decay: 0.05
- kwargs:
- betas: [0.9, 0.95]
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
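
With `patch_size: 4` on 32x32 CIFAR images the ViT sees an 8x8 patch grid, and `mask_ratio: 0.75` hides three quarters of it, so the encoder only processes 16 visible patches while the decoder reconstructs the rest:

```python
img_size, patch_size, mask_ratio = 32, 4, 0.75

num_patches = (img_size // patch_size) ** 2  # 8 * 8 = 64 patches
num_masked = int(num_patches * mask_ratio)   # 48 patches hidden from the encoder
num_visible = num_patches - num_masked       # 16 patches actually encoded
print(num_patches, num_masked, num_visible)  # 64 48 16
```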
diff --git a/solo-learn/scripts/pretrain/cifar/nnclr.yaml b/solo-learn/scripts/pretrain/cifar/nnclr.yaml
deleted file mode 100644
index 2786f36..0000000
--- a/solo-learn/scripts/pretrain/cifar/nnclr.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "nnclr-cifar10" # change here for cifar100
-method: "nnclr"
-backbone:
- name: "resnet18"
-method_kwargs:
- temperature: 0.2
- proj_hidden_dim: 2048
- pred_hidden_dim: 4096
- proj_output_dim: 256
- queue_size: 65536
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.4
- classifier_lr: 0.1
- weight_decay: 1e-5
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
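
NNCLR contrasts each view against its positive's nearest neighbour in a FIFO queue of `queue_size` past embeddings rather than against the positive itself. A minimal sketch of that lookup, assuming L2-normalized embeddings (illustrative, not solo-learn's implementation):

```python
import torch

def nearest_neighbor(z, queue):
    """Replace each row of z with its most similar row in the support queue.

    z:     (batch, dim) L2-normalized embeddings
    queue: (queue_size, dim) L2-normalized embeddings from past batches
    """
    sim = z @ queue.T        # cosine similarities, shape (batch, queue_size)
    idx = sim.argmax(dim=1)  # nearest-neighbour index per sample
    return queue[idx]        # retrieved neighbours, shape (batch, dim)
```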
diff --git a/solo-learn/scripts/pretrain/cifar/nnsiam.yaml b/solo-learn/scripts/pretrain/cifar/nnsiam.yaml
deleted file mode 100644
index 3d611e7..0000000
--- a/solo-learn/scripts/pretrain/cifar/nnsiam.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "nnsiam-cifar10" # change here for cifar100
-method: "nnsiam"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- pred_hidden_dim: 4096
- proj_output_dim: 2048
- queue_size: 65536
-momentum:
- base_tau: 0.99
- final_tau: 1.0
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.5
- classifier_lr: 0.1
- weight_decay: 1e-5
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/ressl.yaml b/solo-learn/scripts/pretrain/cifar/ressl.yaml
deleted file mode 100644
index 7272f62..0000000
--- a/solo-learn/scripts/pretrain/cifar/ressl.yaml
+++ /dev/null
@@ -1,56 +0,0 @@
-defaults:
- - _self_
- - augmentations: ressl.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "ressl-cifar10" # change here for cifar100
-method: "ressl"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_output_dim: 256
- proj_hidden_dim: 4096
- base_tau_momentum: 0.99
- final_tau_momentum: 1.0
- momentum_classifier:
- temperature_q: 0.1
- temperature_k: 0.04
-momentum:
- base_tau: 0.99
- final_tau: 1.0
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.05
- classifier_lr: 0.1
- weight_decay: 1e-4
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
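
ReSSL's two temperatures are asymmetric on purpose: the teacher's similarity distribution over queue embeddings is sharpened with the smaller `temperature_k`, and the student matches it at the milder `temperature_q`. A sketch of the relational loss under that reading (not solo-learn's exact code):

```python
import torch
import torch.nn.functional as F

def ressl_loss(q, k, queue, temperature_q=0.1, temperature_k=0.04):
    """Cross-entropy from sharp teacher relations to student relations.

    q, k:  (batch, dim) student / momentum-teacher embeddings, L2-normalized
    queue: (queue_size, dim) L2-normalized embeddings from past batches
    """
    log_p_student = F.log_softmax(q @ queue.T / temperature_q, dim=1)
    p_teacher = F.softmax(k @ queue.T / temperature_k, dim=1)  # sharper targets
    return -(p_teacher * log_p_student).sum(dim=1).mean()
```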
diff --git a/solo-learn/scripts/pretrain/cifar/simclr.yaml b/solo-learn/scripts/pretrain/cifar/simclr.yaml
deleted file mode 100644
index 0531365..0000000
--- a/solo-learn/scripts/pretrain/cifar/simclr.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "simclr-cifar10" # change here for cifar100
-method: "simclr"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 256
- temperature: 0.2
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "datasets/imagenet100/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.4
- classifier_lr: 0.1
- weight_decay: 1e-4
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/simsiam.yaml b/solo-learn/scripts/pretrain/cifar/simsiam.yaml
deleted file mode 100644
index dec94d4..0000000
--- a/solo-learn/scripts/pretrain/cifar/simsiam.yaml
+++ /dev/null
@@ -1,50 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric_weak.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "simsiam-cifar10" # change here for cifar100
-method: "simsiam"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 2048
- pred_hidden_dim: 512
- temperature: 0.2
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "datasets/imagenet100/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.5
- classifier_lr: 0.1
- weight_decay: 1e-5
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/supcon.yaml b/solo-learn/scripts/pretrain/cifar/supcon.yaml
deleted file mode 100644
index 365317b..0000000
--- a/solo-learn/scripts/pretrain/cifar/supcon.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "supcon-cifar10" # change here for cifar100
-method: "supcon"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 256
- temperature: 0.2
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "datasets/imagenet100/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 256
- lr: 0.4
- classifier_lr: 0.1
- weight_decay: 1e-5
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/swav.yaml b/solo-learn/scripts/pretrain/cifar/swav.yaml
deleted file mode 100644
index 01d6c43..0000000
--- a/solo-learn/scripts/pretrain/cifar/swav.yaml
+++ /dev/null
@@ -1,57 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "swav-cifar10" # change here for cifar100
-method: "swav"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- queue_size: 3840
- proj_output_dim: 128
- num_prototypes: 3000
- epoch_queue_starts: 50
- freeze_prototypes_epochs: 2
- temperature: 0.1
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "datasets/imagenet100/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.6
- classifier_lr: 0.1
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/vibcreg.yaml b/solo-learn/scripts/pretrain/cifar/vibcreg.yaml
deleted file mode 100644
index ebc2404..0000000
--- a/solo-learn/scripts/pretrain/cifar/vibcreg.yaml
+++ /dev/null
@@ -1,77 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "vibcreg-cifar10" # change here for cifar100
-method: "vibcreg"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 2048
- sim_loss_weight: 25.0
- var_loss_weight: 25.0
- cov_loss_weight: 200.0
- iternorm: True
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "datasets/imagenet100/val"
- format: "image_folder"
- num_workers: 4
-augmentations:
- - rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.1
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 32
- num_crops: 2
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-4
- kwargs:
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/cifar/vicreg.yaml b/solo-learn/scripts/pretrain/cifar/vicreg.yaml
deleted file mode 100644
index 0a8db31..0000000
--- a/solo-learn/scripts/pretrain/cifar/vicreg.yaml
+++ /dev/null
@@ -1,83 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "vicreg-cifar10" # change here for cifar100
-method: "vicreg"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 2048
- sim_loss_weight: 25.0
- var_loss_weight: 25.0
- cov_loss_weight: 1.0
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "datasets/imagenet100/val"
- format: "image_folder"
- num_workers: 4
-augmentations:
- - rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- enabled: True
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- prob: 0.8
- grayscale:
- enabled: True
- prob: 0.2
- gaussian_blur:
- enabled: False
- prob: 0.0
- solarization:
- enabled: True
- prob: 0.1
- equalization:
- enabled: False
- prob: 0.0
- horizontal_flip:
- enabled: True
- prob: 0.5
- crop_size: 32
- num_crops: 2
-optimizer:
- name: "lars"
- batch_size: 256
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-4
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
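
The three `*_loss_weight` entries are VICReg's invariance, variance, and covariance coefficients. A compact sketch of the weighted loss following the paper's definitions (solo-learn's code may differ in detail):

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    n, d = z1.shape
    inv = F.mse_loss(z1, z2)                      # invariance (sim) term
    var, cov = 0.0, 0.0
    for z in (z1, z2):
        z = z - z.mean(dim=0)
        std = torch.sqrt(z.var(dim=0) + eps)
        var = var + torch.relu(1.0 - std).mean()  # hinge on per-dimension std
        c = (z.T @ z) / (n - 1)
        off_diag = c - torch.diag(torch.diag(c))
        cov = cov + (off_diag ** 2).sum() / d     # off-diagonal covariance
    return sim_w * inv + var_w * var + cov_w * cov
```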
diff --git a/solo-learn/scripts/pretrain/cifar/wandb/mhug.yaml b/solo-learn/scripts/pretrain/cifar/wandb/mhug.yaml
deleted file mode 100644
index c842e44..0000000
--- a/solo-learn/scripts/pretrain/cifar/wandb/mhug.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: unitn-mhug
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/cifar/wandb/private.yaml b/solo-learn/scripts/pretrain/cifar/wandb/private.yaml
deleted file mode 100644
index ad4e200..0000000
--- a/solo-learn/scripts/pretrain/cifar/wandb/private.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: None
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/cifar/wmse.yaml b/solo-learn/scripts/pretrain/cifar/wmse.yaml
deleted file mode 100644
index 7b77e45..0000000
--- a/solo-learn/scripts/pretrain/cifar/wmse.yaml
+++ /dev/null
@@ -1,73 +0,0 @@
-defaults:
- - _self_
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "wmse-cifar10" # change here for cifar100
-method: "wmse"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 1024
- proj_output_dim: 64
- whitening_size: 128
-data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "datasets/imagenet100/val"
- format: "image_folder"
- num_workers: 4
-augmentations:
- - rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- enabled: True
- prob: 0.5
- crop_size: 32
- num_crops: 2
-optimizer:
- name: "adam"
- batch_size: 256
- lr: 2e-3
- classifier_lr: 3e-3
- weight_decay: 1e-6
-scheduler:
- name: "warmup_cosine"
- warmup_start_lr: 0
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
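
`whitening_size: 128` means embeddings are whitened in sub-batches of 128 before the MSE between views is taken. A rough sketch of ZCA-style whitening on one sub-batch (illustrative; the method's actual whitening operator may be computed differently):

```python
import torch

def zca_whiten(z, eps=1e-5):
    """ZCA-whiten a (whitening_size, dim) sub-batch of embeddings."""
    z = z - z.mean(dim=0)
    cov = (z.T @ z) / (z.shape[0] - 1)
    eigvals, eigvecs = torch.linalg.eigh(cov)
    w = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.T
    return z @ w  # whitened: identity covariance in expectation
```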
diff --git a/solo-learn/scripts/pretrain/custom/augmentations/asymmetric.yaml b/solo-learn/scripts/pretrain/custom/augmentations/asymmetric.yaml
deleted file mode 100644
index 30d8d26..0000000
--- a/solo-learn/scripts/pretrain/custom/augmentations/asymmetric.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 1.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 1
-
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.1
- solarization:
- prob: 0.2
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 1
diff --git a/solo-learn/scripts/pretrain/custom/augmentations/symmetric.yaml b/solo-learn/scripts/pretrain/custom/augmentations/symmetric.yaml
deleted file mode 100644
index a852a08..0000000
--- a/solo-learn/scripts/pretrain/custom/augmentations/symmetric.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 2
diff --git a/solo-learn/scripts/pretrain/custom/byol.yaml b/solo-learn/scripts/pretrain/custom/byol.yaml
deleted file mode 100644
index 517dcb4..0000000
--- a/solo-learn/scripts/pretrain/custom/byol.yaml
+++ /dev/null
@@ -1,63 +0,0 @@
-
-# how to configure the augmentations
-# it's also possible to copy paste here for a finer control
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "byol-custom-dataset"
-method: "byol"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 4096
- proj_output_dim: 256
- pred_hidden_dim: 8192
-momentum:
- base_tau: 0.99
- final_tau: 1.0
-data:
- dataset: "custom"
- train_path: "PATH_TO_TRAIN_DIR"
- val_path: "PATH_TO_VAL_DIR" # remove this if there's no validation dir
- format: "dali" # data format, supports "image_folder", "dali" or "h5"
- num_workers: 4
- # set this to True if the dataset is not stored as subfolders for each class
- # if no labels are provided, "h5" is not supported
- # convert a custom dataset by following `scripts/utils/convert_imgfolder_to_h5.py`
- no_labels: False
-optimizer:
- name: "lars"
- batch_size: 64
- lr: 0.5
- classifier_lr: 0.1
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/custom/wandb/mhug.yaml b/solo-learn/scripts/pretrain/custom/wandb/mhug.yaml
deleted file mode 100644
index c842e44..0000000
--- a/solo-learn/scripts/pretrain/custom/wandb/mhug.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: unitn-mhug
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/custom/wandb/private.yaml b/solo-learn/scripts/pretrain/custom/wandb/private.yaml
deleted file mode 100644
index ad4e200..0000000
--- a/solo-learn/scripts/pretrain/custom/wandb/private.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: None
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/imagenet-100-multicrop/augmentations/asymmetric.yaml b/solo-learn/scripts/pretrain/imagenet-100-multicrop/augmentations/asymmetric.yaml
deleted file mode 100644
index b5f253f..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100-multicrop/augmentations/asymmetric.yaml
+++ /dev/null
@@ -1,68 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 1.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 1
-
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.1
- solarization:
- prob: 0.2
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 1
-
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 96
- num_crops: 6
diff --git a/solo-learn/scripts/pretrain/imagenet-100-multicrop/augmentations/symmetric.yaml b/solo-learn/scripts/pretrain/imagenet-100-multicrop/augmentations/symmetric.yaml
deleted file mode 100644
index 8ca7032..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100-multicrop/augmentations/symmetric.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
- - rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 2
-
- - rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 96
- num_crops: 6
diff --git a/solo-learn/scripts/pretrain/imagenet-100-multicrop/supcon.yaml b/solo-learn/scripts/pretrain/imagenet-100-multicrop/supcon.yaml
deleted file mode 100644
index 5de0a77..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100-multicrop/supcon.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "supcon-multicrop-imagenet100"
-method: "supcon"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 512
- temperature: 0.2
-data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 64
- lr: 0.5
- classifier_lr: 0.1
- weight_decay: 1e-5
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100-multicrop/wandb/mhug.yaml b/solo-learn/scripts/pretrain/imagenet-100-multicrop/wandb/mhug.yaml
deleted file mode 100644
index c842e44..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100-multicrop/wandb/mhug.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: unitn-mhug
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/imagenet-100-multicrop/wandb/private.yaml b/solo-learn/scripts/pretrain/imagenet-100-multicrop/wandb/private.yaml
deleted file mode 100644
index ad4e200..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100-multicrop/wandb/private.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: None
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/imagenet-100/all4one.yml b/solo-learn/scripts/pretrain/imagenet-100/all4one.yml
deleted file mode 100644
index 8cf76c8..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/all4one.yml
+++ /dev/null
@@ -1,55 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "all4one-imagenet100"
-method: "all4one"
-backbone:
- name: "resnet18"
-method_kwargs:
- temperature: 0.2
- proj_hidden_dim: 2048
- pred_hidden_dim: 4096
- proj_output_dim: 256
- queue_size: 98340
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 1.0
- classifier_lr: 0.1
- weight_decay: 1e-5
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/augmentations/reconstruction.yaml b/solo-learn/scripts/pretrain/imagenet-100/augmentations/reconstruction.yaml
deleted file mode 100644
index 2ebd9fa..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/augmentations/reconstruction.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.0
- grayscale:
- prob: 0.0
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 1
diff --git a/solo-learn/scripts/pretrain/imagenet-100/augmentations/ressl.yaml b/solo-learn/scripts/pretrain/imagenet-100/augmentations/ressl.yaml
deleted file mode 100644
index 328b15a..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/augmentations/ressl.yaml
+++ /dev/null
@@ -1,41 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 1
-
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.0
- grayscale:
- prob: 0.0
- gaussian_blur:
- prob: 0.0
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 1
diff --git a/solo-learn/scripts/pretrain/imagenet-100/augmentations/symmetric.yaml b/solo-learn/scripts/pretrain/imagenet-100/augmentations/symmetric.yaml
deleted file mode 100644
index a852a08..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/augmentations/symmetric.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 2
diff --git a/solo-learn/scripts/pretrain/imagenet-100/augmentations/symmetric_weak.yaml b/solo-learn/scripts/pretrain/imagenet-100/augmentations/symmetric_weak.yaml
deleted file mode 100644
index 921d7dc..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/augmentations/symmetric_weak.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.08
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.4
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 2
diff --git a/solo-learn/scripts/pretrain/imagenet-100/augmentations/vicreg.yaml b/solo-learn/scripts/pretrain/imagenet-100/augmentations/vicreg.yaml
deleted file mode 100644
index 05ec827..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/augmentations/vicreg.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.5
- solarization:
- prob: 0.1
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 2
diff --git a/solo-learn/scripts/pretrain/imagenet-100/augmentations/wmse.yaml b/solo-learn/scripts/pretrain/imagenet-100/augmentations/wmse.yaml
deleted file mode 100644
index 423e691..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/augmentations/wmse.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-- rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.2
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 224
- num_crops: 2
-
-- rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- prob: 0.8
- brightness: 0.8
- contrast: 0.8
- saturation: 0.8
- hue: 0.2
- grayscale:
- prob: 0.2
- gaussian_blur:
- prob: 0.2
- solarization:
- prob: 0.0
- equalization:
- prob: 0.0
- horizontal_flip:
- prob: 0.5
- crop_size: 96
- num_crops: 6
diff --git a/solo-learn/scripts/pretrain/imagenet-100/barlow.yaml b/solo-learn/scripts/pretrain/imagenet-100/barlow.yaml
deleted file mode 100644
index ddd2da6..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/barlow.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "barlow_twins-imagenet100"
-method: "barlow_twins"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 2048
- scale_loss: 0.1
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-4
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/deepclusterv2.yaml b/solo-learn/scripts/pretrain/imagenet-100/deepclusterv2.yaml
deleted file mode 100644
index f6c023f..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/deepclusterv2.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "deepclusterv2-imagenet100"
-method: "deepclusterv2"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 128
- num_prototypes: [3000, 3000, 3000]
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 0.6
- classifier_lr: 0.1
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
- min_lr: 0.0006
- warmup_start_lr: 0.0
- warmup_epochs: 11
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-dali:
- encode_indexes_into_labels: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/dino.yaml b/solo-learn/scripts/pretrain/imagenet-100/dino.yaml
deleted file mode 100644
index 1129e12..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/dino.yaml
+++ /dev/null
@@ -1,57 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "dino-imagenet100"
-method: "dino"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 256
- num_prototypes: 4096
- warmup_teacher_temperature_epochs: 50
-momentum:
- base_tau: 0.9995
- final_tau: 1.0
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/dino_vit.yaml b/solo-learn/scripts/pretrain/imagenet-100/dino_vit.yaml
deleted file mode 100644
index 89ff43b..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/dino_vit.yaml
+++ /dev/null
@@ -1,54 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "dino-vit-imagenet100"
-method: "dino"
-backbone:
- name: "vit_tiny"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 256
- num_prototypes: 65536
- norm_last_layer: False
-momentum:
- base_tau: 0.9995
- final_tau: 1.0
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "adamw"
- batch_size: 128
- lr: 0.005
- classifier_lr: 3e-3
- weight_decay: 1e-4
-scheduler:
- name: "warmup_cosine"
- warmup_start_lr: 0.00001
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/mae.yaml b/solo-learn/scripts/pretrain/imagenet-100/mae.yaml
deleted file mode 100644
index 7366cd6..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/mae.yaml
+++ /dev/null
@@ -1,54 +0,0 @@
-defaults:
- - _self_
- - augmentations: reconstruction.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mae-imagenet100"
-method: "mae"
-backbone:
- name: "vit_base"
-method_kwargs:
- decoder_embed_dim: 512
- decoder_depth: 8
- decoder_num_heads: 16
- mask_ratio: 0.75
- norm_pix_loss: True
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "adamw"
- batch_size: 128
- lr: 2.0e-4
- classifier_lr: 2.0e-4
- weight_decay: 0.05
- kwargs:
- betas: [0.9, 0.95]
-scheduler:
- name: "warmup_cosine"
- warmup_start_lr: 0.0
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/mocov3.yaml b/solo-learn/scripts/pretrain/imagenet-100/mocov3.yaml
deleted file mode 100644
index df5d4de..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/mocov3.yaml
+++ /dev/null
@@ -1,57 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mocov3-imagenet100"
-method: "mocov3"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 4096
- proj_output_dim: 256
- pred_hidden_dim: 4096
- temperature: 0.2
-momentum:
- base_tau: 0.99
- final_tau: 1.0
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 0.3
- classifier_lr: 0.3
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/mocov3_vit.yaml b/solo-learn/scripts/pretrain/imagenet-100/mocov3_vit.yaml
deleted file mode 100644
index af942c5..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/mocov3_vit.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mocov3-vit-imagenet100"
-method: "mocov3"
-backbone:
- name: "vit_small"
-method_kwargs:
- proj_hidden_dim: 4096
- proj_output_dim: 256
- pred_hidden_dim: 4096
- temperature: 0.2
-momentum:
- base_tau: 0.99
- final_tau: 1.0
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "adamw"
- batch_size: 128
- lr: 3.0e-4
- classifier_lr: 3.0e-4
- weight_decay: 0.1
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1, 2, 3, 4, 5, 6, 7]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/nnclr.yaml b/solo-learn/scripts/pretrain/imagenet-100/nnclr.yaml
deleted file mode 100644
index 422b7be..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/nnclr.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "nnclr-imagenet100"
-method: "nnclr"
-backbone:
- name: "resnet18"
-method_kwargs:
- temperature: 0.2
- proj_hidden_dim: 2048
- pred_hidden_dim: 4096
- proj_output_dim: 256
- queue_size: 65536
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 0.4
- classifier_lr: 0.1
- weight_decay: 1e-5
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/ressl.yaml b/solo-learn/scripts/pretrain/imagenet-100/ressl.yaml
deleted file mode 100644
index 70416d6..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/ressl.yaml
+++ /dev/null
@@ -1,56 +0,0 @@
-defaults:
- - _self_
- - augmentations: ressl.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "ressl-imagenet100"
-method: "ressl"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_output_dim: 256
- proj_hidden_dim: 4096
- base_tau_momentum: 0.99
- final_tau_momentum: 1.0
- momentum_classifier:
- temperature_q: 0.1
- temperature_k: 0.04
-momentum:
- base_tau: 0.99
- final_tau: 1.0
-data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 128
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-4
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/simsiam.yaml b/solo-learn/scripts/pretrain/imagenet-100/simsiam.yaml
deleted file mode 100644
index dab8055..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/simsiam.yaml
+++ /dev/null
@@ -1,51 +0,0 @@
-defaults:
- - _self_
- - augmentations: asymmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "simsiam-imagenet100"
-method: "simsiam"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 2048
- pred_hidden_dim: 512
- temperature: 0.2
-data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 128
- lr: 0.5
- classifier_lr: 0.1
- weight_decay: 1e-5
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-augmentations_cfg: "scripts/configs/defaults/augmentations/symmetric/weak.yaml"
-wandb_cfg: "scripts/configs/defaults/wandb/private.yaml"
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/supcon.yaml b/solo-learn/scripts/pretrain/imagenet-100/supcon.yaml
deleted file mode 100644
index 0b91b88..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/supcon.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "supcon-imagenet100"
-method: "supcon"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 512
- temperature: 0.2
-data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "sgd"
- batch_size: 128
- lr: 0.5
- classifier_lr: 0.1
- weight_decay: 1e-5
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/swav.yaml b/solo-learn/scripts/pretrain/imagenet-100/swav.yaml
deleted file mode 100644
index 1833f54..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/swav.yaml
+++ /dev/null
@@ -1,57 +0,0 @@
-defaults:
- - _self_
- - augmentations: symmetric.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "swav-imagenet100"
-method: "swav"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- queue_size: 3840
- proj_output_dim: 128
- num_prototypes: 3000
- epoch_queue_starts: 50
- freeze_prototypes_epochs: 2
- temperature: 0.1
-data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 0.6
- classifier_lr: 0.1
- weight_decay: 1e-6
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/vibcreg.yaml b/solo-learn/scripts/pretrain/imagenet-100/vibcreg.yaml
deleted file mode 100644
index ba9c891..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/vibcreg.yaml
+++ /dev/null
@@ -1,56 +0,0 @@
-defaults:
- - _self_
- - augmentations: vicreg.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "vibcreg-imagenet100"
-method: "vibcreg"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 2048
- sim_loss_weight: 25.0
- var_loss_weight: 25.0
- cov_loss_weight: 200.0
- iternorm: True
-data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-4
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/vicreg.yaml b/solo-learn/scripts/pretrain/imagenet-100/vicreg.yaml
deleted file mode 100644
index 68e817f..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/vicreg.yaml
+++ /dev/null
@@ -1,84 +0,0 @@
-defaults:
- - _self_
- - augmentations: vicreg.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "vicreg-imagenet100"
-method: "vicreg"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 2048
- proj_output_dim: 2048
- sim_loss_weight: 25.0
- var_loss_weight: 25.0
- cov_loss_weight: 1.0
-data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
- format: "dali"
- num_workers: 4
-augmentations:
- - rrc:
- enabled: True
- crop_min_scale: 0.2
- crop_max_scale: 1.0
- color_jitter:
- enabled: True
- brightness: 0.4
- contrast: 0.4
- saturation: 0.2
- hue: 0.1
- prob: 0.8
- grayscale:
- enabled: True
- prob: 0.2
- gaussian_blur:
- enabled: True
- prob: 0.5
- solarization:
- enabled: True
- prob: 0.1
- equalization:
- enabled: False
- prob: 0.0
- horizontal_flip:
- enabled: True
- prob: 0.5
- crop_size: 224
- num_crops: 2
-optimizer:
- name: "lars"
- batch_size: 128
- lr: 0.3
- classifier_lr: 0.1
- weight_decay: 1e-4
- kwargs:
- clip_lr: True
- eta: 0.02
- exclude_bias_n_norm: True
-scheduler:
- name: "warmup_cosine"
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet-100/wandb/mhug.yaml b/solo-learn/scripts/pretrain/imagenet-100/wandb/mhug.yaml
deleted file mode 100644
index c842e44..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/wandb/mhug.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: unitn-mhug
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/imagenet-100/wandb/private.yaml b/solo-learn/scripts/pretrain/imagenet-100/wandb/private.yaml
deleted file mode 100644
index ad4e200..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/wandb/private.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
-enabled: True
-entity: None
-project: "gen-ssl"
diff --git a/solo-learn/scripts/pretrain/imagenet-100/wmse.yaml b/solo-learn/scripts/pretrain/imagenet-100/wmse.yaml
deleted file mode 100644
index 3e17478..0000000
--- a/solo-learn/scripts/pretrain/imagenet-100/wmse.yaml
+++ /dev/null
@@ -1,50 +0,0 @@
-defaults:
- - _self_
- - augmentations: wmse.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "wmse-imagenet100"
-method: "wmse"
-backbone:
- name: "resnet18"
-method_kwargs:
- proj_hidden_dim: 1024
- proj_output_dim: 64
- whitening_size: 128
-data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
- format: "dali"
- num_workers: 4
-optimizer:
- name: "adam"
- batch_size: 128
- lr: 2e-3
- classifier_lr: 3e-3
- weight_decay: 1e-6
-scheduler:
- name: "warmup_cosine"
- warmup_start_lr: 0
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet/augmentations/synthetic_symmetric.yaml b/solo-learn/scripts/pretrain/imagenet/augmentations/synthetic_symmetric.yaml
index f01fed5..bde83fb 100644
--- a/solo-learn/scripts/pretrain/imagenet/augmentations/synthetic_symmetric.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/augmentations/synthetic_symmetric.yaml
@@ -23,7 +23,7 @@
prob: 0.5
crop_size: 224
num_crops: 1
-
+
- rrc:
enabled: True
crop_min_scale: 0.08
diff --git a/solo-learn/scripts/pretrain/imagenet-100/augmentations/asymmetric.yaml b/solo-learn/scripts/pretrain/imagenet/augmentations/synthetic_symmetric_weak.yaml
similarity index 66%
rename from solo-learn/scripts/pretrain/imagenet-100/augmentations/asymmetric.yaml
rename to solo-learn/scripts/pretrain/imagenet/augmentations/synthetic_symmetric_weak.yaml
index 30d8d26..186f539 100644
--- a/solo-learn/scripts/pretrain/imagenet-100/augmentations/asymmetric.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/augmentations/synthetic_symmetric_weak.yaml
@@ -1,3 +1,6 @@
+# Augmentations should be defined separately for synthetic and real data in the config files, which is
+# why this file defines two augmentation lists: the first list is applied to the real data and the
+# second list to the synthetic data.
- rrc:
enabled: True
crop_min_scale: 0.08
@@ -6,12 +9,12 @@
prob: 0.8
brightness: 0.4
contrast: 0.4
- saturation: 0.2
+ saturation: 0.4
hue: 0.1
grayscale:
prob: 0.2
gaussian_blur:
- prob: 1.0
+ prob: 0.5
solarization:
prob: 0.0
equalization:
@@ -29,14 +32,14 @@
prob: 0.8
brightness: 0.4
contrast: 0.4
- saturation: 0.2
+ saturation: 0.4
hue: 0.1
grayscale:
prob: 0.2
gaussian_blur:
- prob: 0.1
+ prob: 0.5
solarization:
- prob: 0.2
+ prob: 0.0
equalization:
prob: 0.0
horizontal_flip:
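
In practice the paired lists mean each branch gets its own pipeline: the first entry augments the real source image and the second augments the generated one. Together with the `generative_augmentation_prob` key in the configs below, view construction plausibly looks like this (an illustrative sketch, not the actual DALI pipeline):

```python
import random

def make_views(real_img, synthetic_img, real_aug, synthetic_aug, gen_prob=1.0):
    """Build the two SSL views; with probability gen_prob the second view
    comes from the generative augmentation, otherwise from the real image."""
    view1 = real_aug(real_img)
    if random.random() < gen_prob:
        view2 = synthetic_aug(synthetic_img)  # second augmentation list
    else:
        view2 = real_aug(real_img)
    return view1, view2
```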
diff --git a/solo-learn/scripts/pretrain/imagenet/barlow.yaml b/solo-learn/scripts/pretrain/imagenet/barlow.yaml
index 1eac333..4d06ab0 100644
--- a/solo-learn/scripts/pretrain/imagenet/barlow.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/barlow.yaml
@@ -11,7 +11,7 @@ hydra:
run:
dir: .
-name: "barlow_twins-imagenet"
+name: "barlow-imagenet"
method: "barlow_twins"
backbone:
name: "resnet50"
@@ -22,8 +22,8 @@ method_kwargs:
scale_loss: 0.048
data:
dataset: imagenet
- train_path: "/datasets/imagenet/train"
- val_path: "/datasets/imagenet/val"
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
@@ -40,10 +40,10 @@ scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
- enabled: True
+ enabled: False
# overwrite PL stuff
max_epochs: 100
diff --git a/solo-learn/scripts/pretrain/imagenet-100/byol.yaml b/solo-learn/scripts/pretrain/imagenet/barlow_diff.yaml
similarity index 56%
rename from solo-learn/scripts/pretrain/imagenet-100/byol.yaml
rename to solo-learn/scripts/pretrain/imagenet/barlow_diff.yaml
index 35cd7d5..9816a1f 100644
--- a/solo-learn/scripts/pretrain/imagenet-100/byol.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/barlow_diff.yaml
@@ -11,45 +11,47 @@ hydra:
run:
dir: .
-name: "byol-imagenet100"
-method: "byol"
+name: "barlow-imagenet-diffusion"
+method: "barlow_twins"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 4096
- proj_output_dim: 256
- pred_hidden_dim: 8192
-momentum:
- base_tau: 0.99
- final_tau: 1.0
+ proj_output_dim: 4096
+ lamb: 0.0051
+ scale_loss: 0.048
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: DIFFUSION_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
format: "dali"
num_workers: 4
optimizer:
name: "lars"
- batch_size: 128
- lr: 0.5
+ batch_size: 64
+ lr: 0.8
classifier_lr: 0.1
- weight_decay: 1e-6
+ weight_decay: 1.5e-6
kwargs:
- clip_lr: True
- eta: 0.02
+ clip_lr: False
+ eta: 0.001
exclude_bias_n_norm: True
scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
- enabled: True
+ enabled: False
# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
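
The `synthetic_index_min`/`synthetic_index_max` pair suggests ten pre-generated variants stored per source image, with one drawn at random per view. A sketch of that lookup; the `<image_id>_<k>.jpg` naming is purely hypothetical:

```python
import random
from pathlib import Path

def sample_synthetic(synthetic_root, image_id, idx_min=0, idx_max=9):
    """Pick one of the pre-generated samples for this image.
    The file-naming scheme here is an assumption for illustration."""
    k = random.randint(idx_min, idx_max)  # inclusive on both ends
    return Path(synthetic_root) / f"{image_id}_{k}.jpg"
```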
diff --git a/solo-learn/scripts/pretrain/imagenet-100-multicrop/byol.yaml b/solo-learn/scripts/pretrain/imagenet/barlow_icgan.yaml
similarity index 58%
rename from solo-learn/scripts/pretrain/imagenet-100-multicrop/byol.yaml
rename to solo-learn/scripts/pretrain/imagenet/barlow_icgan.yaml
index f17ede0..5e3f533 100644
--- a/solo-learn/scripts/pretrain/imagenet-100-multicrop/byol.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/barlow_icgan.yaml
@@ -11,45 +11,47 @@ hydra:
run:
dir: .
-name: "byol-multicrop-imagenet100"
-method: "byol"
+name: "barlow-imagenet-icgan"
+method: "barlow_twins"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 4096
- proj_output_dim: 256
- pred_hidden_dim: 8192
-momentum:
- base_tau: 0.99
- final_tau: 1.0
+ proj_output_dim: 4096
+ lamb: 0.0051
+ scale_loss: 0.048
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: ICGAN_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
format: "dali"
num_workers: 4
optimizer:
name: "lars"
batch_size: 64
- lr: 0.5
+ lr: 0.8
classifier_lr: 0.1
- weight_decay: 1e-5
+ weight_decay: 1.5e-6
kwargs:
- clip_lr: True
- eta: 0.02
+ clip_lr: False
+ eta: 0.001
exclude_bias_n_norm: True
scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
- enabled: True
+ enabled: False
# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/pretrain/imagenet/byol.yaml b/solo-learn/scripts/pretrain/imagenet/byol.yaml
index 54044dc..292cc59 100644
--- a/solo-learn/scripts/pretrain/imagenet/byol.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/byol.yaml
@@ -24,16 +24,16 @@ momentum:
final_tau: 1.0
data:
dataset: imagenet
- train_path: "/datasets/imagenet/train"
- val_path: "/datasets/imagenet/val"
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
name: "lars"
- batch_size: 64
- lr: 0.45
+ batch_size: 256
+ lr: 0.2
classifier_lr: 0.2
- weight_decay: 1e-6
+  weight_decay: 1.5e-6
kwargs:
clip_lr: False
eta: 0.001
@@ -42,7 +42,7 @@ scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "/projects/imagenet_synthetic/model_checkpoints/solo-learn/solo_trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
diff --git a/solo-learn/scripts/pretrain/cifar/nnbyol.yaml b/solo-learn/scripts/pretrain/imagenet/byol_diff.yaml
similarity index 61%
rename from solo-learn/scripts/pretrain/cifar/nnbyol.yaml
rename to solo-learn/scripts/pretrain/imagenet/byol_diff.yaml
index 5cec47b..e5d8fa1 100644
--- a/solo-learn/scripts/pretrain/cifar/nnbyol.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/byol_diff.yaml
@@ -11,47 +11,51 @@ hydra:
run:
dir: .
-name: "nnbyol-cifar10" # change here for cifar100
-method: "nnbyol"
+name: "byol-imagenet-diffusion"
+method: "byol"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 4096
- pred_hidden_dim: 4096
proj_output_dim: 256
- queue_size: 65536
+ pred_hidden_dim: 4096
momentum:
base_tau: 0.99
final_tau: 1.0
data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: DIFFUSION_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
+ format: "dali"
num_workers: 4
optimizer:
name: "lars"
batch_size: 256
- lr: 1.0
- classifier_lr: 0.1
- weight_decay: 1e-5
+ lr: 0.2
+ classifier_lr: 0.2
+  weight_decay: 1.5e-6
kwargs:
- clip_lr: True
- eta: 0.02
+ clip_lr: False
+ eta: 0.001
exclude_bias_n_norm: True
scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
precision: 16-mixed
+accumulate_grad_batches: 16
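
These BYOL variants add `accumulate_grad_batches: 16` on top of `batch_size: 256` and `devices: 4`; if the batch size is per device, as is typical under PyTorch Lightning DDP, each optimizer step effectively sees 256 × 4 × 16 = 16384 samples. A quick arithmetic check:

```bash
# Effective optimization batch under gradient accumulation, assuming
# batch_size is per device (the usual PyTorch Lightning DDP convention).
per_device=256; devices=4; accum=16
echo $(( per_device * devices * accum ))   # prints 16384
```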
diff --git a/solo-learn/scripts/pretrain/cifar/byol.yaml b/solo-learn/scripts/pretrain/imagenet/byol_icgan.yaml
similarity index 63%
rename from solo-learn/scripts/pretrain/cifar/byol.yaml
rename to solo-learn/scripts/pretrain/imagenet/byol_icgan.yaml
index eec6949..d7aaaff 100644
--- a/solo-learn/scripts/pretrain/cifar/byol.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/byol_icgan.yaml
@@ -11,10 +11,10 @@ hydra:
run:
dir: .
-name: "byol-cifar10" # change here for cifar100
+name: "byol-imagenet-icgan"
method: "byol"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 4096
proj_output_dim: 256
@@ -23,34 +23,39 @@ momentum:
base_tau: 0.99
final_tau: 1.0
data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: ICGAN_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
+ format: "dali"
num_workers: 4
optimizer:
name: "lars"
batch_size: 256
- lr: 1.0
- classifier_lr: 0.1
- weight_decay: 1e-5
+ lr: 0.2
+ classifier_lr: 0.2
+  weight_decay: 1.5e-6
kwargs:
- clip_lr: True
- eta: 0.02
+ clip_lr: False
+ eta: 0.001
exclude_bias_n_norm: True
scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
precision: 16-mixed
+accumulate_grad_batches: 16
diff --git a/solo-learn/scripts/pretrain/imagenet/mae.yaml b/solo-learn/scripts/pretrain/imagenet/mae.yaml
deleted file mode 100644
index 7709fb3..0000000
--- a/solo-learn/scripts/pretrain/imagenet/mae.yaml
+++ /dev/null
@@ -1,57 +0,0 @@
-defaults:
- - _self_
- - augmentations: reconstruction.yaml
- - wandb: private.yaml
- - override hydra/hydra_logging: disabled
- - override hydra/job_logging: disabled
-
-# disable hydra outputs
-hydra:
- output_subdir: null
- run:
- dir: .
-
-name: "mae-imagenet"
-method: "mae"
-backbone:
- name: "vit_base"
-method_kwargs:
- decoder_embed_dim: 512
- decoder_depth: 8
- decoder_num_heads: 16
- mask_ratio: 0.75
- norm_pix_loss: True
-momentum:
- base_tau: 0.9995
- final_tau: 1.0
-data:
- dataset: imagenet
- train_path: "/datasets/imagenet/train"
- val_path: "/datasets/imagenet/val"
- format: "image_folder"
- num_workers: 4
-optimizer:
- name: "adamw"
- batch_size: 64
- lr: 2.0e-4
- classifier_lr: 2.0e-4
- weight_decay: 0.05
- kwargs:
- betas: [0.9, 0.95]
-scheduler:
- name: "warmup_cosine"
- warmup_start_lr: 0.0
-checkpoint:
- enabled: True
- dir: "trained_models"
- frequency: 1
-auto_resume:
- enabled: True
-
-# overwrite PL stuff
-max_epochs: 400
-devices: 4
-sync_batchnorm: True
-accelerator: "gpu"
-strategy: "ddp"
-precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet/mocov2plus.yaml b/solo-learn/scripts/pretrain/imagenet/moco.yaml
similarity index 84%
rename from solo-learn/scripts/pretrain/imagenet/mocov2plus.yaml
rename to solo-learn/scripts/pretrain/imagenet/moco.yaml
index 4ed2187..c2152a4 100644
--- a/solo-learn/scripts/pretrain/imagenet/mocov2plus.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/moco.yaml
@@ -11,7 +11,7 @@ hydra:
run:
dir: .
-name: "mocov2plus-imagenet"
+name: "moco-imagenet"
method: "mocov2plus"
backbone:
name: "resnet50"
@@ -25,8 +25,8 @@ momentum:
final_tau: 0.999
data:
dataset: imagenet
- train_path: "/datasets/imagenet/train"
- val_path: "/datasets/imagenet/val"
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
@@ -39,10 +39,10 @@ scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
- enabled: True
+ enabled: False
# overwrite PL stuff
max_epochs: 100
diff --git a/solo-learn/scripts/pretrain/cifar/mocov2plus.yaml b/solo-learn/scripts/pretrain/imagenet/moco_diff.yaml
similarity index 57%
rename from solo-learn/scripts/pretrain/cifar/mocov2plus.yaml
rename to solo-learn/scripts/pretrain/imagenet/moco_diff.yaml
index 8c990b1..732f162 100644
--- a/solo-learn/scripts/pretrain/cifar/mocov2plus.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/moco_diff.yaml
@@ -1,6 +1,6 @@
defaults:
- _self_
- - augmentations: symmetric_weak.yaml
+ - augmentations: synthetic_symmetric_weak.yaml
- wandb: private.yaml
- override hydra/hydra_logging: disabled
- override hydra/job_logging: disabled
@@ -11,42 +11,46 @@ hydra:
run:
dir: .
-name: "mocov2plus-cifar10" # change here for cifar100
+name: "moco-imagenet-diff"
method: "mocov2plus"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 2048
proj_output_dim: 256
- queue_size: 32768
+ queue_size: 65536
temperature: 0.2
momentum:
base_tau: 0.99
final_tau: 0.999
data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: DIFFUSION_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
+ format: "dali"
num_workers: 4
optimizer:
name: "sgd"
- batch_size: 256
+ batch_size: 64
lr: 0.3
- classifier_lr: 0.3
- weight_decay: 1e-4
+ classifier_lr: 0.4
+ weight_decay: 3e-5
scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
- enabled: True
+ enabled: False
# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/pretrain/imagenet-100/mocov2plus.yaml b/solo-learn/scripts/pretrain/imagenet/moco_icgan.yaml
similarity index 61%
rename from solo-learn/scripts/pretrain/imagenet-100/mocov2plus.yaml
rename to solo-learn/scripts/pretrain/imagenet/moco_icgan.yaml
index afbe0b4..efbf1f5 100644
--- a/solo-learn/scripts/pretrain/imagenet-100/mocov2plus.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/moco_icgan.yaml
@@ -1,6 +1,6 @@
defaults:
- _self_
- - augmentations: symmetric_weak.yaml
+ - augmentations: synthetic_symmetric_weak.yaml
- wandb: private.yaml
- override hydra/hydra_logging: disabled
- override hydra/job_logging: disabled
@@ -11,10 +11,10 @@ hydra:
run:
dir: .
-name: "mocov2plus-imagenet100"
+name: "moco-imagenet-icgan"
method: "mocov2plus"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 2048
proj_output_dim: 256
@@ -24,29 +24,33 @@ momentum:
base_tau: 0.99
final_tau: 0.999
data:
- dataset: imagenet100
- train_path: "./datasets/imagenet-100/train"
- val_path: "./datasets/imagenet-100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: ICGAN_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
format: "dali"
num_workers: 4
optimizer:
name: "sgd"
- batch_size: 128
+ batch_size: 64
lr: 0.3
- classifier_lr: 0.3
- weight_decay: 1e-4
+ classifier_lr: 0.4
+ weight_decay: 3e-5
scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
- enabled: True
+ enabled: False
# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/pretrain/imagenet/simclr.yaml b/solo-learn/scripts/pretrain/imagenet/simclr.yaml
index f201b04..5bdd082 100644
--- a/solo-learn/scripts/pretrain/imagenet/simclr.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/simclr.yaml
@@ -21,16 +21,16 @@ method_kwargs:
temperature: 0.2
data:
dataset: imagenet
- train_path: "/datasets/imagenet/train"
- val_path: "/datasets/imagenet/val"
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
format: "dali"
num_workers: 4
optimizer:
name: "lars"
- batch_size: 64
+ batch_size: 256
lr: 0.3
classifier_lr: 0.1
- weight_decay: 1e-4
+ weight_decay: 1e-6
kwargs:
clip_lr: True
eta: 0.02
@@ -39,7 +39,7 @@ scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "/projects/imagenet_synthetic/model_checkpoints/solo-learn/solo_trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
diff --git a/solo-learn/scripts/pretrain/imagenet-100/simclr.yaml b/solo-learn/scripts/pretrain/imagenet/simclr_diff.yaml
similarity index 65%
rename from solo-learn/scripts/pretrain/imagenet-100/simclr.yaml
rename to solo-learn/scripts/pretrain/imagenet/simclr_diff.yaml
index 8a07198..0457525 100644
--- a/solo-learn/scripts/pretrain/imagenet-100/simclr.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/simclr_diff.yaml
@@ -1,6 +1,6 @@
defaults:
- _self_
- - augmentations: symmetric.yaml
+ - augmentations: synthetic_symmetric.yaml
- wandb: private.yaml
- override hydra/hydra_logging: disabled
- override hydra/job_logging: disabled
@@ -11,26 +11,30 @@ hydra:
run:
dir: .
-name: "simclr-imagenet100"
+name: "simclr-imagenet-diff"
method: "simclr"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 4096
proj_output_dim: 512
temperature: 0.2
data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: DIFFUSION_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
format: "dali"
num_workers: 4
optimizer:
name: "lars"
- batch_size: 128
+ batch_size: 256
lr: 0.3
classifier_lr: 0.1
- weight_decay: 1e-4
+ weight_decay: 1e-6
kwargs:
clip_lr: True
eta: 0.02
@@ -39,14 +43,14 @@ scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/pretrain/imagenet-100-multicrop/simclr.yaml b/solo-learn/scripts/pretrain/imagenet/simclr_icgan.yaml
similarity index 65%
rename from solo-learn/scripts/pretrain/imagenet-100-multicrop/simclr.yaml
rename to solo-learn/scripts/pretrain/imagenet/simclr_icgan.yaml
index cbd804b..fd8b662 100644
--- a/solo-learn/scripts/pretrain/imagenet-100-multicrop/simclr.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/simclr_icgan.yaml
@@ -1,6 +1,6 @@
defaults:
- _self_
- - augmentations: symmetric.yaml
+ - augmentations: synthetic_symmetric.yaml
- wandb: private.yaml
- override hydra/hydra_logging: disabled
- override hydra/job_logging: disabled
@@ -11,26 +11,30 @@ hydra:
run:
dir: .
-name: "simclr-multicrop-imagenet100"
+name: "simclr-imagenet-icgan"
method: "simclr"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 4096
proj_output_dim: 512
temperature: 0.2
data:
- dataset: imagenet100
- train_path: "datasets/imagenet100/train"
- val_path: "datasets/imagenet100/val"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: ICGAN_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
format: "dali"
num_workers: 4
optimizer:
name: "lars"
- batch_size: 64
+ batch_size: 256
lr: 0.3
classifier_lr: 0.1
- weight_decay: 1e-4
+ weight_decay: 1e-6
kwargs:
clip_lr: True
eta: 0.02
@@ -39,14 +43,14 @@ scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
-max_epochs: 400
-devices: [0, 1]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/pretrain/cifar/mocov3.yaml b/solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_25.yaml
similarity index 60%
rename from solo-learn/scripts/pretrain/cifar/mocov3.yaml
rename to solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_25.yaml
index 9eccbd2..b4752eb 100644
--- a/solo-learn/scripts/pretrain/cifar/mocov3.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_25.yaml
@@ -1,6 +1,6 @@
defaults:
- _self_
- - augmentations: asymmetric.yaml
+ - augmentations: synthetic_symmetric.yaml
- wandb: private.yaml
- override hydra/hydra_logging: disabled
- override hydra/job_logging: disabled
@@ -11,29 +11,29 @@ hydra:
run:
dir: .
-name: "mocov3-cifar10" # change here for cifar100
-method: "mocov3"
+name: "simclr-imagenet-diff-25"
+method: "simclr"
backbone:
- name: "resnet18"
+ name: "resnet50"
method_kwargs:
proj_hidden_dim: 4096
- proj_output_dim: 256
- pred_hidden_dim: 4096
+ proj_output_dim: 512
temperature: 0.2
-momentum:
- base_tau: 0.99
- final_tau: 1.0
data:
- dataset: cifar10 # change here for cifar100
- train_path: "./datasets"
- val_path: "./datasets"
- format: "image_folder"
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: DIFFUSION_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 0.25
+ format: "dali"
num_workers: 4
optimizer:
name: "lars"
batch_size: 256
lr: 0.3
- classifier_lr: 0.3
+ classifier_lr: 0.1
weight_decay: 1e-6
kwargs:
clip_lr: True
@@ -43,14 +43,14 @@ scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
# overwrite PL stuff
-max_epochs: 1000
-devices: [0]
+max_epochs: 100
+devices: 4
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
diff --git a/solo-learn/scripts/pretrain/imagenet/simclr_synthetic.yaml b/solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_50.yaml
similarity index 73%
rename from solo-learn/scripts/pretrain/imagenet/simclr_synthetic.yaml
rename to solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_50.yaml
index e1fbc2d..fefa117 100644
--- a/solo-learn/scripts/pretrain/imagenet/simclr_synthetic.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_50.yaml
@@ -11,7 +11,7 @@ hydra:
run:
dir: .
-name: "simclr-synthetic-imagenet"
+name: "simclr-imagenet-diff-50"
method: "simclr"
backbone:
name: "resnet50"
@@ -21,9 +21,9 @@ method_kwargs:
temperature: 0.2
data:
dataset: imagenet
- train_path: "/datasets/imagenet/train"
- val_path: "/datasets/imagenet/val"
- synthetic_path: "/projects/imagenet_synthetic/arashaf_stablediff_batched"
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: DIFFUSION_SYNTHETIC_PATH
synthetic_index_min: 0
synthetic_index_max: 9
generative_augmentation_prob: 0.5
@@ -31,10 +31,10 @@ data:
num_workers: 4
optimizer:
name: "lars"
- batch_size: 64
+ batch_size: 256
lr: 0.3
classifier_lr: 0.1
- weight_decay: 1e-4
+ weight_decay: 1e-6
kwargs:
clip_lr: True
eta: 0.02
@@ -43,7 +43,7 @@ scheduler:
name: "warmup_cosine"
checkpoint:
enabled: True
- dir: "/projects/imagenet_synthetic/model_checkpoints/solo-learn/solo_trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
diff --git a/solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_75.yaml b/solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_75.yaml
new file mode 100644
index 0000000..2e7a3bc
--- /dev/null
+++ b/solo-learn/scripts/pretrain/imagenet/simclr_paper_synth_stable_75.yaml
@@ -0,0 +1,57 @@
+defaults:
+ - _self_
+ - augmentations: synthetic_symmetric.yaml
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+
+name: "simclr-imagenet-diff-75"
+method: "simclr"
+backbone:
+ name: "resnet50"
+method_kwargs:
+ proj_hidden_dim: 4096
+ proj_output_dim: 512
+ temperature: 0.2
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: DIFFUSION_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 0.75
+ format: "dali"
+ num_workers: 4
+optimizer:
+ name: "lars"
+ batch_size: 256
+ lr: 0.3
+ classifier_lr: 0.1
+ weight_decay: 1e-6
+ kwargs:
+ clip_lr: True
+ eta: 0.02
+ exclude_bias_n_norm: True
+scheduler:
+ name: "warmup_cosine"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16-mixed
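
The `simclr_paper_synth_stable_{25,50,75}` configs differ only in `name` and `generative_augmentation_prob` (0.25, 0.5, 0.75), so the ablation could equally be run from a single base file with overrides. A sketch, under the same `main_pretrain.py` assumption as above:

```bash
# Sweep the generative augmentation probability from one base config;
# main_pretrain.py and the Hydra override syntax are assumed as above.
for p in 0.25 0.5 0.75; do
    python3 main_pretrain.py \
        --config-path scripts/pretrain/imagenet \
        --config-name simclr_paper_synth_stable_50 \
        name="simclr-imagenet-diff-${p}" \
        data.generative_augmentation_prob="${p}"
done
```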
diff --git a/solo-learn/scripts/pretrain/imagenet/simsiam.yaml b/solo-learn/scripts/pretrain/imagenet/simsiam.yaml
index 5439d79..1544197 100644
--- a/solo-learn/scripts/pretrain/imagenet/simsiam.yaml
+++ b/solo-learn/scripts/pretrain/imagenet/simsiam.yaml
@@ -4,17 +4,15 @@ defaults:
- wandb: private.yaml
- override hydra/hydra_logging: disabled
- override hydra/job_logging: disabled
-
# disable hydra outputs
hydra:
output_subdir: null
run:
dir: .
-
-name: "simsiam-imagenet"
-method: "simsiam"
+name: “simsiam-imagenet”
+method: “simsiam”
backbone:
- name: "resnet50"
+ name: “resnet50”
method_kwargs:
proj_hidden_dim: 4096
proj_output_dim: 4096
@@ -22,29 +20,28 @@ method_kwargs:
temperature: 0.2
data:
dataset: imagenet
-  train_path: "/datasets/imagenet/train"
-  val_path: "/datasets/imagenet/val"
+  train_path: TRAIN_PATH
+  val_path: VAL_PATH
   format: "dali"
num_workers: 4
optimizer:
- name: "sgd"
+ name: “sgd”
batch_size: 64
lr: 0.5
classifier_lr: 0.1
weight_decay: 1e-5
scheduler:
- name: "warmup_cosine"
+ name: “warmup_cosine”
checkpoint:
enabled: True
- dir: "/projects/imagenet_synthetic/model_checkpoints/solo-learn/solo_trained_models"
+ dir: SAVE_PATH
frequency: 1
auto_resume:
enabled: True
-
# overwrite PL stuff
max_epochs: 100
devices: 4
sync_batchnorm: True
 accelerator: "gpu"
 strategy: "ddp"
 precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet/simsiam_diff.yaml b/solo-learn/scripts/pretrain/imagenet/simsiam_diff.yaml
new file mode 100644
index 0000000..975e3c8
--- /dev/null
+++ b/solo-learn/scripts/pretrain/imagenet/simsiam_diff.yaml
@@ -0,0 +1,51 @@
+defaults:
+ - _self_
+ - augmentations: asymmetric.yaml
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+name: "simsiam-imagenet-diff"
+method: "simsiam"
+backbone:
+  name: "resnet50"
+method_kwargs:
+ proj_hidden_dim: 4096
+ proj_output_dim: 4096
+ pred_hidden_dim: 512
+ temperature: 0.2
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: DIFFUSION_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
+  format: "dali"
+ num_workers: 4
+optimizer:
+  name: "sgd"
+ batch_size: 64
+ lr: 0.5
+ classifier_lr: 0.1
+ weight_decay: 1e-5
+scheduler:
+  name: "warmup_cosine"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16-mixed
diff --git a/solo-learn/scripts/pretrain/imagenet/simsiam_icgan.yaml b/solo-learn/scripts/pretrain/imagenet/simsiam_icgan.yaml
new file mode 100644
index 0000000..f9eacfb
--- /dev/null
+++ b/solo-learn/scripts/pretrain/imagenet/simsiam_icgan.yaml
@@ -0,0 +1,51 @@
+defaults:
+ - _self_
+ - augmentations: asymmetric.yaml
+ - wandb: private.yaml
+ - override hydra/hydra_logging: disabled
+ - override hydra/job_logging: disabled
+# disable hydra outputs
+hydra:
+ output_subdir: null
+ run:
+ dir: .
+name: "simsiam-imagenet-icgan"
+method: "simsiam"
+backbone:
+  name: "resnet50"
+method_kwargs:
+ proj_hidden_dim: 4096
+ proj_output_dim: 4096
+ pred_hidden_dim: 512
+ temperature: 0.2
+data:
+ dataset: imagenet
+ train_path: TRAIN_PATH
+ val_path: VAL_PATH
+ synthetic_path: ICGAN_SYNTHETIC_PATH
+ synthetic_index_min: 0
+ synthetic_index_max: 9
+ generative_augmentation_prob: 1
+  format: "dali"
+ num_workers: 4
+optimizer:
+  name: "sgd"
+ batch_size: 64
+ lr: 0.5
+ classifier_lr: 0.1
+ weight_decay: 1e-5
+scheduler:
+  name: "warmup_cosine"
+checkpoint:
+ enabled: True
+ dir: SAVE_PATH
+ frequency: 1
+auto_resume:
+ enabled: True
+# overwrite PL stuff
+max_epochs: 100
+devices: 4
+sync_batchnorm: True
+accelerator: "gpu"
+strategy: "ddp"
+precision: 16-mixed
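
YAML treats typographic quotes (“ ”) as ordinary characters, so a value like `name: “simsiam”` silently keeps the quote marks inside the string; such characters creep in easily when configs pass through word processors. A guard for the scripts tree, assuming GNU grep and sed:

```bash
# Detect and normalize typographic quotes that would end up inside YAML
# string values (assumes GNU grep/sed with a UTF-8 locale).
grep -rlP '[“”]' solo-learn/scripts/ | xargs -r sed -i 's/[“”]/"/g'
```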
diff --git a/solo-learn/scripts/umap/imagenet-100/umap.sh b/solo-learn/scripts/umap/imagenet-100/umap.sh
deleted file mode 100644
index 04bbb4d..0000000
--- a/solo-learn/scripts/umap/imagenet-100/umap.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-python3 main_umap.py \
- --dataset imagenet100 \
- --train_data_path ./datasets/imagenet-100/train \
- --val_data_path ./datasets/imagenet-100/val \
- --batch_size 16 \
- --num_workers 10 \
- --pretrained_checkpoint_dir $1
diff --git a/solo-learn/zoo/cifar10.sh b/solo-learn/zoo/cifar10.sh
deleted file mode 100644
index 9fc623f..0000000
--- a/solo-learn/zoo/cifar10.sh
+++ /dev/null
@@ -1,109 +0,0 @@
-mkdir trained_models
-cd trained_models
-mkdir cifar10
-cd cifar10
-
-# Barlow Twins
-mkdir barlow_twins
-cd barlow_twins
-gdown https://drive.google.com/uc?id=1x7y44E05vuobibfObT4n3jqLI8QNVESV
-gdown https://drive.google.com/uc?id=1Mxfq2YGQ53bNRV2fNYzvYIneM5ZGeb2h
-cd ..
-
-# BYOL
-mkdir byol
-cd byol
-gdown https://drive.google.com/uc?id=1zOE8O2yPyhE23LMoesMoDPdLyh1qbI8k
-gdown https://drive.google.com/uc?id=1l1XIWE1ailKzsQnUPGDgyvK0escOsta6
-cd ..
-
-# DeepCluster V2
-mkdir deepclusterv2
-cd deepclusterv2
-gdown https://drive.google.com/uc?id=13L_QlwrBRJhdeCaVdgkRYWfvoh4PIWwj
-gdown https://drive.google.com/uc?id=17jRJ-LC56uWRuNluWXecXHjTxomuGs_T
-cd ..
-
-# DINO
-mkdir dino
-cd dino
-gdown https://drive.google.com/uc?id=1Wv9w5j22YitGAWi4p3IJYzLVo4fQkpSu
-gdown https://drive.google.com/uc?id=1PBElgMN5gjZsK3o1L55jNnb5A1ebbOvu
-cd ..
-
-# MoCo V2+
-mkdir mocov2plus
-cd mocov2plus
-gdown https://drive.google.com/uc?id=1viIUTHmLdozDWtzMicV4oOyC50iL2QDU
-gdown https://drive.google.com/uc?id=1ZLpgK13N8rgBxvqRbyGFd_8mF03pStIx
-cd ..
-
-# MoCo V3
-mkdir mocov3
-cd mocov3
-gdown https://drive.google.com/uc?id=1EFHWBLYFsglZYPYsBc0YrtihrzBZRe7h
-gdown https://drive.google.com/uc?id=1Gb6TCWoY2aN8AK3UnuZu4IxIktqbDybP
-cd ..
-
-# NNCLR
-mkdir nnclr
-cd nnclr
-gdown https://drive.google.com/uc?id=1zKReUmJ35vRnQxfSxn7yRVRW_oy3LUDF
-gdown https://drive.google.com/uc?id=1UyI9r19PoFGqHjd5r1UEpstCSTkleja7
-cd ..
-
-# ReSSL
-mkdir ressl
-cd ressl
-gdown https://drive.google.com/uc?id=1UdDWvgpyvj3VFVm0lq-WrGj0-GTcEpHq
-gdown https://drive.google.com/uc?id=1XkkYUuEI79__4GpCCDhuFEbv0BbRdCBh
-cd ..
-
-# SimCLR
-mkdir simclr
-cd simclr
-gdown https://drive.google.com/uc?id=15fI7gb9M92jZWBZoGLvarYDiNYK3RN2O
-gdown https://drive.google.com/uc?id=1HMJof4v2B5S-khepI_x8bgFv72I5KMc9
-cd ..
-
-# Simsiam
-mkdir simsiam
-cd simsiam
-gdown https://drive.google.com/uc?id=1ZMGGTziK0DbCP43fDx2rPFrtJxCLJDmb
-gdown https://drive.google.com/uc?id=1hh1QrQiWfRej-8D6L67T_F7Je9-EUUg2
-cd ..
-
-# SupCon
-mkdir supcon
-cd supcon
-gdown https://drive.google.com/uc?id=1tkk_r7tYozLgf9khW6LiGxaTvJQ4c5sA
-gdown https://drive.google.com/uc?id=1OhZul-rtBVUOqvOkORk8HgOLIXGCNEzB
-cd ..
-
-# SwAV
-mkdir swav
-cd swav
-gdown https://drive.google.com/uc?id=1CPok55wwN_4QecEjubdLeBo_9qWSJTHw
-gdown https://drive.google.com/uc?id=1t59f1Q8ifx8tAySGpD2pmvogNcR1USEo
-cd ..
-
-# VIbCReg
-mkdir vibcreg
-cd vibcreg
-gdown https://drive.google.com/uc?id=1dHsKrhCcwWIXFwQJ4oVPgLcEcT3SecQV
-gdown https://drive.google.com/uc?id=1OPsUf8VnKo5w6T8-rEQFaodUNxvQ8CTT
-cd ..
-
-# VICReg
-mkdir vicreg
-cd vicreg
-gdown https://drive.google.com/uc?id=1TeliMNt5bOchqJj2u_JjB0_ahKB5LKi5
-gdown https://drive.google.com/uc?id=1dsdPL-5QNS9LyHypYN6VQfEuiNWLKJqN
-cd ..
-
-# W-MSE
-mkdir wmse
-cd wmse
-gdown https://drive.google.com/uc?id=1jTjpmVTi9rtzy3NPEEp_61py-jeHy5fi
-gdown https://drive.google.com/uc?id=1YLuqazfSDOruSiu4Kl6OAexDnt5LKEIT
-cd ..
diff --git a/solo-learn/zoo/cifar100.sh b/solo-learn/zoo/cifar100.sh
deleted file mode 100644
index 35673bb..0000000
--- a/solo-learn/zoo/cifar100.sh
+++ /dev/null
@@ -1,109 +0,0 @@
-mkdir trained_models
-cd trained_models
-mkdir cifar100
-cd cifar100
-
-# Barlow Twins
-mkdir barlow_twins
-cd barlow_twins
-gdown https://drive.google.com/uc?id=17cZt3DorfiCYb0ZauLHv0iM-YDGYa-mE
-gdown https://drive.google.com/uc?id=17Me99dh-XfTV-fniXn0Cy-ZcwGa9dRZe
-cd ..
-
-# BYOL
-mkdir byol
-cd byol
-gdown https://drive.google.com/uc?id=1fE7TdRboFJnYXr8JSY_tGmuFGitI8l23
-gdown https://drive.google.com/uc?id=1qsBJoO1ROAEUeQtvl8hOBDnLXZKY8Ziy
-cd ..
-
-# DeepCluster V2
-mkdir deepclusterv2
-cd deepclusterv2
-gdown https://drive.google.com/uc?id=1grFfh0aaVYpeuYbgFYB4rmfj9uvXhYSd
-gdown https://drive.google.com/uc?id=12jBsv8Fd2vk6OD5khbl4qp7szfSCiERD
-cd ..
-
-# DINO
-mkdir dino
-cd dino
-gdown https://drive.google.com/uc?id=16gdp5L_a9BVcRvcU4f-NUJCsIpX3Oecr
-gdown https://drive.google.com/uc?id=1M4UVug_ARfNW_sjnRbc0KBceBXVKVVxH
-cd ..
-
-# MoCo V2+
-mkdir mocov2plus
-cd mocov2plus
-gdown https://drive.google.com/uc?id=1KNkCA2Hr70QsmOSif9_UUndFerOb7Jft
-gdown https://drive.google.com/uc?id=1T_SpFAEhZap2fvKnUvk8hzL-C7Nzad93
-cd ..
-
-# MoCo V3
-mkdir mocov3
-cd mocov3
-gdown https://drive.google.com/uc?id=1QAuKJmegGCJrntAL80tfTrbi2fI4sPl-
-gdown https://drive.google.com/uc?id=1jtJEi66g5z7dBn0FDcSL7zoU4ArEEyqU
-cd ..
-
-# NNCLR
-mkdir nnclr
-cd nnclr
-gdown https://drive.google.com/uc?id=1aodwBlGK6EqrC_kthk8JcuxVcY4S5CF9
-gdown https://drive.google.com/uc?id=14Z8REvCrdW8eZ0kwxmNioIPneyCSAk0E
-cd ..
-
-# ReSSL
-mkdir ressl
-cd ressl
-gdown https://drive.google.com/uc?id=16sKNdpScv5FckpC02W41mjETXL6T5u2S
-gdown https://drive.google.com/uc?id=1niA588wO6KX1dcbhfelb_vumByHgDfVV
-cd ..
-
-# SimCLR
-mkdir simclr
-cd simclr
-gdown https://drive.google.com/uc?id=17YGC7y4yxkVAF8ZNezdtmN-uc70jz3zq
-gdown https://drive.google.com/uc?id=1bmrfJxEK505_ky0m7q7ZJSDpFfgqIuQ6
-cd ..
-
-# Simsiam
-mkdir simsiam
-cd simsiam
-gdown https://drive.google.com/uc?id=1DStn9PAEMJtzh1Mxb3NjfTtm5vaNgRM5
-gdown https://drive.google.com/uc?id=1y03EtFuMi5fZGPJZfN3hkONe99WsFBOJ
-cd ..
-
-# SupCon
-mkdir supcon
-cd supcon
-gdown https://drive.google.com/uc?id=1QhPHENtgYttIF1Dn1srA4dAkIiC_5P7W
-gdown https://drive.google.com/uc?id=1QsZs9TfWoycrHBBUrliWe-cqkGQ9epAD
-cd ..
-
-# SwAV
-mkdir swav
-cd swav
-gdown https://drive.google.com/uc?id=1oJzFfayNpcShK1bZtDK58HthcKY2bpns
-gdown https://drive.google.com/uc?id=14ed_7MG_pg-G_qjQcxVc8MUZWwFcz3mF
-cd ..
-
-# VIbCReg
-mkdir vibcreg
-cd vibcreg
-gdown https://drive.google.com/uc?id=1akNcewHzh4ideoQPWakaXWGDxfGoxkNu
-gdown https://drive.google.com/uc?id=1cdvZXUmmDptSe-RkYyiQXyREwthvMuxW
-cd ..
-
-# VICReg
-mkdir vicreg
-cd vicreg
-gdown https://drive.google.com/uc?id=1kH78BUBKprrsxL2KRKmorVQ9vJHsMsID
-gdown https://drive.google.com/uc?id=1TJk6G6KY1URPpruhKIDuovv66U-mnQHo
-cd ..
-
-# W-MSE
-mkdir wmse
-cd wmse
-gdown https://drive.google.com/uc?id=1_6EmYFqAW_U8DFv72KUaAe-BV8xkRxsp
-gdown https://drive.google.com/uc?id=1uIeg5EKEMefeBIyYFm9SBmChJPBc-0g_
-cd ..
diff --git a/solo-learn/zoo/imagenet.sh b/solo-learn/zoo/imagenet.sh
deleted file mode 100644
index a875703..0000000
--- a/solo-learn/zoo/imagenet.sh
+++ /dev/null
@@ -1,43 +0,0 @@
-mkdir trained_models
-cd trained_models
-mkdir imagenet
-cd imagenet
-
-# Barlow Twins
-mkdir barlow_twins
-cd barlow_twins
-gdown https://drive.google.com/uc?id=1GodHwmdMn9u76b5XFzEr5v59tOfwUOof
-gdown https://drive.google.com/uc?id=1EKdbR72-gtNE782254tjXi9UR2NiwEWh
-cd ..
-
-# BYOL
-mkdir byol
-cd byol
-gdown https://drive.google.com/uc?id=1TheL_4tmDWByCxg8XHke5VEz_lcYHH64
-gdown https://drive.google.com/uc?id=18gG0Jo59cFVX4qNUO119jIhzHcJkAmGz
-cd ..
-
-# MoCo V2+
-mkdir mocov2plus
-cd mocov2plus
-gdown https://drive.google.com/uc?id=1BBauwWTJV38BCf56KtOK9TJWLyjNH-mP
-gdown https://drive.google.com/uc?id=1JMpGSYjefFzxT5GTbEc_2d4THxOxC3Ca
-cd ..
-
-# MAE
-mkdir mae
-cd mae
-
-mkdir pretrain
-cd pretrain
-gdown https://drive.google.com/uc?id=1WfkMVNGrQB-NK12XPkcWWxJsFy1H_0TI
-gdown https://drive.google.com/uc?id=1EAeZy3lyr35wVcPBISKQXjHFxXtxA0DY
-cd ..
-
-mkdir finetune
-cd finetune
-gdown https://drive.google.com/uc?id=1buWWhf7zPJtpL3qOG_LRePfnwurjoJtw
-gdown https://drive.google.com/uc?id=1n6symLssKGolf_WQd5I1RS-Gj5e-go92
-cd ..
-
-cd ..
diff --git a/solo-learn/zoo/imagenet100.sh b/solo-learn/zoo/imagenet100.sh
deleted file mode 100644
index 44d632f..0000000
--- a/solo-learn/zoo/imagenet100.sh
+++ /dev/null
@@ -1,123 +0,0 @@
-mkdir trained_models
-cd trained_models
-mkdir imagenet100
-cd imagenet100
-
-# Barlow Twins
-mkdir barlow_twins
-cd barlow_twins
-gdown https://drive.google.com/uc?id=1C2qQSqp8cXvfrwHVG9MuGTPT2TOTsGla # checkpoint
-gdown https://drive.google.com/uc?id=1TY10aa97P4Fl7EgSjTy_u_QME9tkcU4r # args
-cd ..
-
-# BYOL
-mkdir byol
-cd byol
-gdown https://drive.google.com/uc?id=1cgJaSRr3HPZRNMwzYwwS5Vwtkna3LgGs # checkpoint
-gdown https://drive.google.com/uc?id=1EIluSRGaV0Ft1UQecGhpkFUCKVwMmtv9 # args
-cd ..
-
-# DeepCluster V2
-mkdir deepclusterv2
-cd deepclusterv2
-gdown https://drive.google.com/uc?id=1ANWOVMFMa-9eRWTKRGiUkNenJYD-McjT # checkpoint
-gdown https://drive.google.com/uc?id=18oOypleOOHQ7z9XL9zUTgDB7zpRdbMti # args
-cd ..
-
-# DINO
-mkdir dino
-cd dino
-gdown https://drive.google.com/uc?id=1MkuNjlIMqzuRwdG_K6NoDrGQH2GtssXV # checkpoint
-gdown https://drive.google.com/uc?id=1MlYaqPsp_pEaDR7nDRbxv3oOsMTVBHg9 # args
-cd ..
-
-# DINO (vit tiny)
-mkdir dino-vit
-cd dino-vit
-gdown https://drive.google.com/uc?id=11rHOKD4EQB2AJ1C2tLHz0pjai6MqwT9v # checkpoint
-gdown https://drive.google.com/uc?id=15pQbMd0xiLZNsmozBmsKA3_HVEdpSqmy # args
-cd ..
-
-# MoCo V2+
-mkdir mocov2plus
-cd mocov2plus
-gdown https://drive.google.com/uc?id=1aXGypKbIqV8BqtVOzpk2lRJWXb--XejO # checkpoint
-gdown https://drive.google.com/uc?id=1s5rzHSqAMRKaUR4ZP3HWbCm8QLXU6JQ8 # args
-cd ..
-
-# MoCo V3
-mkdir mocov3
-cd mocov3
-gdown https://drive.google.com/uc?id=1cUaAdx-6NXCkeSMo-mQtpPnYk7zA4Gg4 # checkpoint
-gdown https://drive.google.com/uc?id=1mb6ZRKF1CdGP0rdJI2yjyStZ-FCFjsi4 # args
-cd ..
-
-# MoCo V3 R50
-mkdir mocov3-r50
-cd mocov3-r50
-gdown https://drive.google.com/uc?id=1KiwHisYRmzYjLYDm1zQxZlUKe8BkI2i8 # checkpoint
-gdown https://drive.google.com/uc?id=16pix6gNybXnssMpXlzjKnMWl9lRfdv20 # args
-cd ..
-
-# NNCLR
-mkdir nnclr
-cd nnclr
-gdown https://drive.google.com/uc?id=1rj9-YBUNX0wHVLjQuksOOubfEZJsrsjF # checkpoint
-gdown https://drive.google.com/uc?id=1GBT6-QkhuDLexfVgwM0SWbzuXJ6QrF9o # args
-cd ..
-
-# ReSSL
-mkdir ressl
-cd ressl
-gdown https://drive.google.com/uc?id=1AH3hFcakrGKXzxmzO2LBHWjk-Mgu5PUN # checkpoint
-gdown https://drive.google.com/uc?id=1XWKERLv_YgFQ_Oy33TD8DhTfH9qSoVQa # args
-cd ..
-
-# SimCLR
-mkdir simclr
-cd simclr
-gdown https://drive.google.com/uc?id=1dU88Sh5F_8J_UXXEQ8FOWS85g8eFEZVa # checkpoint
-gdown https://drive.google.com/uc?id=1865vcQhuvGeNm0iQ9g87APLwLvYNuqcn # args
-cd ..
-
-# Simsiam
-mkdir simsiam
-cd simsiam
-gdown https://drive.google.com/uc?id=1cwAyDCpU36zmQ6-r4Ww7YqiZ41vbjckQ # checkpoint
-gdown https://drive.google.com/uc?id=1EU43HZKrLu_ZTV3CVAkjkFS6HORhtmR9 # args
-cd ..
-
-# SupCon
-mkdir supcon
-cd supcon
-gdown https://drive.google.com/uc?id=1-NRvw7J9WrQKBvDhmuirQmTklMlQasxI # checkpoint
-gdown https://drive.google.com/uc?id=1IKTW20UTWlHSO4RsgO1QxakYs5ZscY26 # args
-cd ..
-
-# SwAV
-mkdir swav
-cd swav
-gdown https://drive.google.com/uc?id=1nDiXHb8ce6_qDyZ8EcqDXi6ptI4A_t6B # checkpoint
-gdown https://drive.google.com/uc?id=1h1-YEqEw5Zj7wl0Gkxiz6WC3qpwa2FgL # args
-cd ..
-
-# VIbCReg
-mkdir vibcreg
-cd vibcreg
-gdown https://drive.google.com/uc?id=1VDUvp0zghvnUgwhWS-s7PuCA1KTPEPPX # checkpoint
-gdown https://drive.google.com/uc?id=14rEyW3cZyUxctjLQunIjyuJMI3DbUQ-b # args
-cd ..
-
-# VICReg
-mkdir vicreg
-cd vicreg
-gdown https://drive.google.com/uc?id=1yAxL-NTOYN6kGi2VtPeo7cPKFxXFKYyP # checkpoint
-gdown https://drive.google.com/uc?id=1A5QaOlUGaId3qECmQusoPDiYx8tjfKDz # args
-cd ..
-
-# W-MSE
-mkdir wmse
-cd wmse
-gdown https://drive.google.com/uc?id=1yYhOsIpbHqJGhqlbMTYwBxMOJkz7rSwo # checkpoint
-gdown https://drive.google.com/uc?id=1Q88g4Rtz_k4FR9QvXwFqAZ-kYuL4dXl- # args
-cd ..