Replies: 1 comment
- @rwightman waiting for your response
- @rwightman thank you for providing so many pre-trained backbone networks.
I am using resnet34 as a feature extractor with an image size of 256x256:
model = timm.create_model('resnet34', pretrained=True, features_only=True)
It gives the following blocks:
Feature from block 0: torch.Size([1, 64, 128, 128])
Feature from block 1: torch.Size([1, 64, 64, 64])
Feature from block 2: torch.Size([1, 128, 32, 32])
Feature from block 3: torch.Size([1, 256, 16, 16])
Feature from block 4: torch.Size([1, 512, 8, 8])
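For reference, the shapes above can be reproduced with a snippet along these lines (a minimal sketch using a dummy 256x256 input; the print format is just for illustration):

import torch
import timm

model = timm.create_model('resnet34', pretrained=True, features_only=True)
x = torch.randn(1, 3, 256, 256)  # dummy 256x256 input
for i, feat in enumerate(model(x)):
    print(f"Feature from block {i}: {feat.shape}")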
How can I apply scSE attention modules after each block, as shown in the architecture below, and pass the results to the decoder?
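One possible starting point is the sketch below: an scSE module (concurrent spatial and channel squeeze-and-excitation, following Roy et al.) applied to each feature map before it goes to the decoder. The SCSE class, the reduction factor, and the final decoder step are illustrative assumptions here, not part of timm; only create_model and feature_info.channels() come from the timm API.

import torch
import torch.nn as nn
import timm

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (scSE)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel SE: global pool -> bottleneck 1x1 convs -> per-channel gate
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial SE: 1x1 conv -> per-pixel gate
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        # scSE combines the two recalibrated maps by addition (as in the paper)
        return x * self.cse(x) + x * self.sse(x)

# One scSE module per feature block, sized from timm's feature_info
encoder = timm.create_model('resnet34', pretrained=True, features_only=True)
attn = nn.ModuleList([SCSE(c) for c in encoder.feature_info.channels()])

x = torch.randn(1, 3, 256, 256)
features = [a(f) for a, f in zip(attn, encoder(x))]
# `features` now holds the attention-weighted skip connections; these would be
# passed to your decoder (hypothetical, depends on your architecture), e.g.
# out = decoder(features)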