Hi Nelson, I hope this message finds you well. I'm curious what your take on this is.
I am very satisfied with the results I obtain with pad_maps=True and padding=False.
However, as you know, pad_maps creates a "frame" that pads the resulting anomaly map, and it seems that anomalies falling inside that padded region are not seen by the model.
Playing with the pad_maps and padding parameters, I realized that, regardless of the pad_maps value, it is padding that determines whether the padded region is ignored. So if I set padding=True, the model also sees anomalies close to the borders of the image. Moreover, if I set padding=True and pad_maps=False, I obtain precise segmentations and I also detect defects close to the borders. To me it seems that pad_maps is only needed to correct the "translation" of the segmentations caused by padding=False.
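Just to make sure we are talking about the same effect, here is a minimal, self-contained sketch of why that frame/translation appears (plain PyTorch with a toy conv stack, not the repo's actual PDN):

```python
import torch
import torch.nn as nn

# Toy backbone (NOT the repo's PDN): a few 3x3 convs, with or without padding,
# just to show how padding=False shrinks the feature map.
def tiny_backbone(padding: bool) -> nn.Sequential:
    p = 1 if padding else 0
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=p), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=p), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=p),
    )

x = torch.randn(1, 3, 64, 64)
print(tiny_backbone(padding=True)(x).shape)   # [1, 32, 64, 64] -> map covers the whole image
print(tiny_backbone(padding=False)(x).shape)  # [1, 32, 58, 58] -> 3 px lost on every border

# With padding=False the anomaly map is smaller than the input, so it has to be
# padded back to the input size with a constant "frame" (which is what I assume
# pad_maps does). The frame restores size/alignment, but no scores were ever
# computed there, which is why defects touching the border can be missed.
small_map = torch.randn(1, 1, 58, 58)
framed = torch.nn.functional.pad(small_map, (3, 3, 3, 3))  # back to 64x64
print(framed.shape)  # [1, 1, 64, 64]
```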
So why am I writing this message instead of simply using padding=True and pad_maps=False?
The problem is that, regardless of pad_maps (so we can set it aside, because it's getting confusing 😁), padding=True significantly worsens the results. It seems the model learns less.
So the first thing that came to my mind is that maybe the Teacher was pretrained with padding=False, so its architecture might differ from that of the Student with padding=True that I'm trying to use.
But based on the code in this repo, it seems you pretrained the Teacher using padding=True and then use a Student with padding=False during the EfficientAD training. Is that correct?
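To clarify what I mean by an architecture/alignment mismatch, here is a hedged sketch (again plain PyTorch with made-up layer sizes, not this repo's code) of what happens when the Teacher and Student are built with different padding settings:

```python
import torch
import torch.nn as nn

# Two otherwise identical conv stacks that differ only in `padding`
# (toy example, not the repo's PDN architecture).
def stack(padding: bool) -> nn.Sequential:
    p = 1 if padding else 0
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=p), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=p), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=p),
    )

x = torch.randn(1, 3, 256, 256)
teacher_out = stack(padding=True)(x)   # [1, 64, 256, 256]
student_out = stack(padding=False)(x)  # [1, 64, 250, 250]
print(teacher_out.shape, student_out.shape)

# If the pretrained Teacher assumed one padding setting and the Student uses
# the other, their feature maps differ in size (and, near the borders, in
# content), so the student-teacher distance needs some crop/alignment step.
# One common option is to crop the padded output to the valid (unpadded)
# region; whether this repo does exactly that is what I'd like to confirm.
crop = (teacher_out.shape[-1] - student_out.shape[-1]) // 2  # 3 px per side here
teacher_cropped = teacher_out[..., crop:-crop, crop:-crop]
print(teacher_cropped.shape)  # [1, 64, 250, 250]
```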
This image shows what I obtain with padding=True and pad_maps=False. It segments perfectly the fake defects I am using for these tests. Unfortunately, the overall performance on the real defects in the rest of the test set is significantly worse than that of the model trained with padding=False (you can also see it in this image: the model struggles to segment the real defect at the bottom).
What do you think? What could be the reason why padding=True degrades my trainings?