# Model Zoo

## Introduction

This file documents a collection of baselines trained with `pycls`, primarily for the Designing Network Design Spaces paper. All configurations for these baselines are located in the `configs/dds_baselines` directory. The tables below provide results and useful statistics about training and inference. Links to the pretrained models are provided as well. The following experimental and training settings are used for all of the training and inference runs.

### Experimental Settings

- All baselines were run on Big Basin servers with 8 NVIDIA Tesla V100 GPUs (16GB GPU memory).
- All baselines were run using PyTorch 1.6, CUDA 10.0, and cuDNN 7.6.
- Inference times are reported for processing 64 images on 1 GPU for all models (a measurement sketch follows this list).
- Training times are reported for 100 epochs on 8 GPUs with the batch size listed.
- Each model was trained 5 times to obtain robust estimates.
- The provided models are from the runs with errors closest to the average of the 5 runs.
- All models and results below are on the ImageNet-1k dataset.
- The model id column is provided for ease of reference.
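The exact benchmarking code lives in `test_models.py` (see the reproducibility notes below); the snippet here is only a rough sketch of how a forward pass can be timed in this way, with illustrative function names and iteration counts:

```python
import torch

def time_inference(model, batch_size=64, res=224, n_iter=30, n_warmup=5):
    """Rough sketch: average time (ms) for one forward pass of a batch."""
    model = model.cuda().eval()
    inputs = torch.zeros(batch_size, 3, res, res, device="cuda")
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        for _ in range(n_warmup):  # warm up cuDNN before timing
            model(inputs)
        torch.cuda.synchronize()   # CUDA ops are asynchronous; sync first
        start.record()
        for _ in range(n_iter):
            model(inputs)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / n_iter  # milliseconds per batch
```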

### Reproducibility Notes

- The source data for the tables below can be found in `model_complexity.json`, `model_error.json`, and `model_timing.json` (see the sketch after this list for reading them).
- This source data is also used for unit tests on the models and helps ensure correctness and reproducibility.
- The exact code for generating the source data can be found in `test_models.py`.
- The tables below are automatically generated by `model_zoo_tables.py`.
- The timings below do not match the paper exactly due to minor changes in the timing methodology.
- Most notably, the ResNe(X)t timings were faster in the paper due to the use of larger batch sizes.
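As a minimal illustration of reading this source data (the per-model key shown is hypothetical; the actual JSON schema is defined by `test_models.py`):

```python
import json

# Minimal sketch: load the recorded top-1 errors. The "RegNetX-200MF" key is
# a hypothetical example; inspect the JSON files for the actual schema.
with open("model_error.json") as f:
    model_error = json.load(f)
print(model_error["RegNetX-200MF"])
```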

### Training Settings

Our primary goal is to provide simple and strong baselines that are easy to reproduce. For all models, we use our basic training settings without any training enhancements (e.g., DropOut, DropConnect, AutoAugment, EMA, etc.) or testing enhancements (e.g., multi-crop, multi-scale, flipping, etc.); please see our Designing Network Design Spaces paper for more information.

- We use SGD with a momentum of 0.9, a half-period cosine schedule, and train for 100 epochs.
- For ResNet/ResNeXt/RegNet, we use a reference learning rate of 0.1 and a weight decay of 5e-5 (see Figure 21).
- For EfficientNet, we use a reference learning rate of 0.2 and a weight decay of 1e-5 (see Figure 22).
- The actual learning rate for each model is computed as (batch-size / 128) * reference-lr (see the schedule sketch below).
- For training, we use aspect-ratio augmentation, horizontal flipping, PCA lighting jitter, and per-channel mean and SD normalization.
- At test time, we rescale images to (256 / 224) * train-res and take the center crop of train-res (see the sketch after this list).
- For ResNet/ResNeXt/RegNet, we use an image size of 224x224 for training.
- For EfficientNet, the training image size varies per model, following the original paper.
For 8-GPU training, we apply a 5-epoch gradual warmup, following the ImageNet in 1 Hour paper. Note that the learning rate scaling rule described above is similar to the one from the ImageNet in 1 Hour paper, but the number of images per GPU varies among models. To understand how the configs are adjusted, please see the examples in the `configs/lr_scaling` directory (coming soon).
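Putting the pieces together, a minimal sketch of the resulting schedule (warmup details such as the starting factor are simplified here and may differ slightly from pycls):

```python
import math

def learning_rate(epoch, batch_size, reference_lr,
                  total_epochs=100, warmup_epochs=5):
    """Sketch of the schedule described above: linearly scaled reference LR,
    a 5-epoch gradual warmup, then a half-period cosine decay to zero."""
    base_lr = (batch_size / 128) * reference_lr  # linear scaling rule
    if epoch < warmup_epochs:
        # Gradual warmup; the exact starting factor is simplified here.
        return base_lr * (epoch + 1) / warmup_epochs
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))
```

For example, RegNetX-200MF trains with batch size 1024 and a reference learning rate of 0.1, so its peak learning rate is (1024 / 128) * 0.1 = 0.8.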

## Baselines

### RegNetX Models

| model | flops (B) | params (M) | acts (M) | batch size | infer (ms) | train (hr) | error (top-1) | model id | download |
|---|---|---|---|---|---|---|---|---|---|
| RegNetX-200MF | 0.2 | 2.7 | 2.2 | 1024 | 11 | 3.0 | 31.1 | 160905981 | model |
| RegNetX-400MF | 0.4 | 5.2 | 3.1 | 1024 | 15 | 4.2 | 27.4 | 160905967 | model |
| RegNetX-600MF | 0.6 | 6.2 | 4.0 | 1024 | 18 | 4.5 | 25.9 | 160906442 | model |
| RegNetX-800MF | 0.8 | 7.3 | 5.1 | 1024 | 23 | 5.9 | 24.8 | 160906036 | model |
| RegNetX-1.6GF | 1.6 | 9.2 | 7.9 | 1024 | 35 | 8.9 | 23.0 | 160990626 | model |
| RegNetX-3.2GF | 3.2 | 15.3 | 11.4 | 512 | 59 | 15.1 | 21.7 | 160906139 | model |
| RegNetX-4.0GF | 4.0 | 22.1 | 12.2 | 512 | 74 | 17.9 | 21.4 | 160906383 | model |
| RegNetX-6.4GF | 6.5 | 26.2 | 16.4 | 512 | 95 | 24.3 | 20.8 | 161116590 | model |
| RegNetX-8.0GF | 8.0 | 39.6 | 14.1 | 512 | 97 | 23.7 | 20.7 | 161107726 | model |
| RegNetX-12GF | 12.1 | 46.1 | 21.4 | 512 | 142 | 34.6 | 20.3 | 160906020 | model |
| RegNetX-16GF | 15.9 | 54.3 | 25.5 | 512 | 171 | 41.3 | 20.0 | 158460855 | model |
| RegNetX-32GF | 31.7 | 107.8 | 36.3 | 256 | 337 | 81.8 | 19.5 | 158188473 | model |
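The download links point to checkpoint files. A minimal sketch for inspecting one (the filename is illustrative, and the `model_state` key is an assumption about the checkpoint layout, so check the actual keys):

```python
import torch

# Illustrative filename; the "model_state" key is an assumption about how
# the checkpoint is laid out -- print checkpoint.keys() if it differs.
checkpoint = torch.load("RegNetX-200MF_dds_8gpu.pyth", map_location="cpu")
weights = checkpoint.get("model_state", checkpoint)
print(sorted(weights)[:5])  # peek at a few parameter names
# model.load_state_dict(weights)  # with a model built from the matching config
```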

### RegNetY Models

| model | flops (B) | params (M) | acts (M) | batch size | infer (ms) | train (hr) | error (top-1) | model id | download |
|---|---|---|---|---|---|---|---|---|---|
| RegNetY-200MF | 0.2 | 3.2 | 2.2 | 1024 | 12 | 3.3 | 29.7 | 176245422 | model |
| RegNetY-400MF | 0.4 | 4.3 | 3.9 | 1024 | 19 | 5.3 | 25.9 | 160906449 | model |
| RegNetY-600MF | 0.6 | 6.1 | 4.3 | 1024 | 20 | 5.3 | 24.5 | 160981443 | model |
| RegNetY-800MF | 0.8 | 6.3 | 5.2 | 1024 | 23 | 6.1 | 23.7 | 160906567 | model |
| RegNetY-1.6GF | 1.6 | 11.2 | 8.0 | 1024 | 41 | 10.3 | 22.1 | 160906681 | model |
| RegNetY-3.2GF | 3.2 | 19.4 | 11.3 | 512 | 69 | 17.0 | 21.1 | 160906834 | model |
| RegNetY-4.0GF | 4.0 | 20.6 | 12.3 | 512 | 69 | 17.5 | 20.6 | 160906838 | model |
| RegNetY-6.4GF | 6.4 | 30.6 | 16.4 | 512 | 108 | 27.2 | 20.1 | 160907112 | model |
| RegNetY-8.0GF | 8.0 | 39.2 | 18.0 | 512 | 115 | 29.3 | 20.1 | 161160905 | model |
| RegNetY-12GF | 12.1 | 51.8 | 21.4 | 512 | 152 | 36.9 | 19.7 | 160907100 | model |
| RegNetY-16GF | 15.9 | 83.6 | 23.0 | 256 | 208 | 50.8 | 19.6 | 161303400 | model |
| RegNetY-32GF | 32.3 | 145.0 | 30.3 | 256 | 331 | 79.4 | 19.1 | 161277763 | model |

### ResNet Models

| model | flops (B) | params (M) | acts (M) | batch size | infer (ms) | train (hr) | error (top-1) | model id | download |
|---|---|---|---|---|---|---|---|---|---|
| ResNet-50 | 4.1 | 25.6 | 11.3 | 256 | 57 | 14.8 | 23.2 | 161235311 | model |
| ResNet-101 | 7.8 | 44.5 | 16.4 | 256 | 97 | 24.0 | 21.4 | 161167170 | model |
| ResNet-152 | 11.5 | 60.2 | 22.8 | 256 | 140 | 33.7 | 20.9 | 161167467 | model |

### ResNeXt Models

| model | flops (B) | params (M) | acts (M) | batch size | infer (ms) | train (hr) | error (top-1) | model id | download |
|---|---|---|---|---|---|---|---|---|---|
| ResNeXt-50 | 4.2 | 25.0 | 14.6 | 256 | 85 | 22.1 | 21.9 | 161167411 | model |
| ResNeXt-101 | 8.0 | 44.2 | 21.4 | 256 | 151 | 39.9 | 20.7 | 161167590 | model |
| ResNeXt-152 | 11.7 | 60.0 | 29.9 | 256 | 231 | 57.6 | 20.4 | 162471172 | model |

### EfficientNet Models

| model | flops (B) | params (M) | acts (M) | batch size | infer (ms) | train (hr) | error (top-1) | model id | download |
|---|---|---|---|---|---|---|---|---|---|
| EfficientNet-B0 | 0.4 | 5.3 | 6.7 | 256 | 31 | 8.8 | 24.9 | 161305613 | model |
| EfficientNet-B1 | 0.7 | 7.8 | 10.9 | 256 | 47 | 13.2 | 24.1 | 161304979 | model |
| EfficientNet-B2 | 1.0 | 9.1 | 13.8 | 256 | 58 | 16.4 | 23.5 | 161305015 | model |
| EfficientNet-B3 | 1.8 | 12.2 | 23.8 | 256 | 92 | 26.4 | 22.5 | 161305060 | model |
| EfficientNet-B4 | 4.4 | 19.3 | 49.5 | 128 | 201 | 57.1 | 21.4 | 161305098 | model |
| EfficientNet-B5 | 10.3 | 30.4 | 98.9 | 64 | 446 | 122.6 | 21.7 | 161305138 | model |