With baseline CNNs without anti-aliasing, we see better shift consistency as we increase the CNN's depth, e.g. VGG11 -> VGG19, ResNet18 -> ResNet152. Why is that so?
Good question. Higher accuracy naturally lends itself to better shift consistency: a classifier with 100% accuracy is automatically consistent across shifts, since it predicts the correct label on both the original and the shifted image, even if it has no shift-invariant inductive bias to begin with.
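For concreteness, here is a minimal sketch of one way to measure shift consistency: take two randomly offset crops of the same image and check how often the model's top-1 predictions agree. The `shift_consistency` helper, the crop size, and the random-tensor input are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import torch
import torchvision.models as models

def shift_consistency(model, images, max_shift=32, n_pairs=100):
    """Fraction of random shift pairs on which top-1 predictions agree.

    `images` is a batch larger than the 224x224 crop size, so a shift
    can be realized as taking a crop at a different offset.
    """
    model.eval()
    agree, total = 0, 0
    with torch.no_grad():
        for _ in range(n_pairs):
            # Two random offsets; their difference is the relative shift.
            h1, w1, h2, w2 = torch.randint(0, max_shift + 1, (4,)).tolist()
            crop1 = images[:, :, h1:h1 + 224, w1:w1 + 224]
            crop2 = images[:, :, h2:h2 + 224, w2:w2 + 224]
            pred1 = model(crop1).argmax(dim=1)
            pred2 = model(crop2).argmax(dim=1)
            agree += (pred1 == pred2).sum().item()
            total += pred1.numel()
    return agree / total

# Example with a baseline (non-anti-aliased) ResNet18.
model = models.resnet18(pretrained=True)
images = torch.randn(8, 3, 224 + 32, 224 + 32)  # stand-in for real data
print(f"shift consistency: {shift_consistency(model, images):.3f}")
```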
Two follow-up questions (a sketch of the distillation setup follows below):

1. Suppose we use an ensemble of very deep CNNs as a teacher and perform knowledge distillation into a small student CNN, e.g. MobileNetV2, where none of the CNNs are anti-aliased. Do you think such a student model will be shift consistent?
2. Same situation as in (1), but the CNNs in the ensemble are anti-aliased versions while the student CNN is not (again a default MobileNetV2). What do you think about the shift consistency of the student model?
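For reference, a minimal sketch of the distillation setup described in (1) and (2), assuming the standard Hinton-style soft-target loss; `distillation_loss` and its hyperparameters (`T`, `alpha`) are illustrative placeholders, not a prescribed recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, labels,
                      T=4.0, alpha=0.9):
    """Hinton-style KD: KL to the averaged teacher ensemble + hard-label CE."""
    # Average the ensemble's temperature-softened probabilities.
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(dim=0)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    teacher_probs, reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example usage with random logits standing in for real model outputs.
student_logits = torch.randn(16, 1000)
teacher_logits_list = [torch.randn(16, 1000) for _ in range(3)]
labels = torch.randint(0, 1000, (16,))
loss = distillation_loss(student_logits, teacher_logits_list, labels)
```

Whether (1) differs from (2) in this setup comes down to whether the anti-aliased teachers' shift-stable soft targets transfer to a student whose own downsampling is still aliased.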