🚀 Add PreProcessor to AnomalyModule #2358
Conversation
A sub-feature request that would fit here: (optionally?) keep both the transformed and original image/mask in the batch. So instead of

    image, gt_mask = self.XXX_transform(batch.image, batch.gt_mask)
    batch.update(image=image, gt_mask=gt_mask)

do something like

    batch.update(image_original=batch.image, gt_mask_original=batch.gt_mask)
    image, gt_mask = self.XXX_transform(batch.image, batch.gt_mask)
    batch.update(image=image, gt_mask=gt_mask)

It's quite practical to have these when using the API (I've re-implemented this in my local copy 100 times, haha).
Yeah, the idea is to keep the original (untransformed) image.
Exactly, makes sense :) But it's also useful to be able to access the transformed one (e.g. when using augmentations).
Didn't get this. Is it because it's not backward compatible?
Oh, I meant it is currently not working; I need to fix it :)
Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter Notebooks. Powered by ReviewNB.
Thanks. I have a few minor comments
    @@ -220,30 +250,12 @@ def input_size(self) -> tuple[int, int] | None:
            The effective input size is the size of the input tensor after the transform has been applied. If the transform
            is not set, or if the transform does not change the shape of the input tensor, this method will return None.
            """
    -       transform = self.transform or self.configure_transforms()
    +       transform = self.pre_processor.train_transform
Should we add a check to ascertain whether `train_transform` is present? Models like VlmAD might not have train transforms passed to them. I feel it should pick up the val or predict transform if the train one is not available.
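A minimal sketch of the fallback being suggested; the helper itself and the attribute names (`val_transform`, `predict_transform`) are assumptions for illustration, not part of this PR:

```python
from torchvision.transforms.v2 import Transform


def effective_transform(pre_processor) -> Transform | None:
    """Return the first configured stage transform, preferring train over val/predict."""
    return (
        getattr(pre_processor, "train_transform", None)
        or getattr(pre_processor, "val_transform", None)
        or getattr(pre_processor, "predict_transform", None)
    )
```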
Thanks! I think we could merge this
📝 Description

The `PreProcessor` class serves as both a PyTorch module and a Lightning callback, handling transforms during the different stages of training, validation, testing, and prediction. This PR demonstrates how to create and use custom pre-processors.

Key Components

The pre-processor functionality is implemented in:

And used by the base `AnomalyModule` in:

Usage Examples
1. Using Default Pre-Processor
The simplest option is to use the default pre-processor, which resizes images to 256x256 and normalizes them using ImageNet statistics:
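A hedged sketch of what this could look like with the standard anomalib API; the model and datamodule used here are just placeholders, and the defaults follow the description above:

```python
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import Padim

# No pre_processor argument: the model falls back to its default PreProcessor,
# i.e. a 256x256 resize followed by ImageNet normalization.
model = Padim()
engine = Engine()
engine.fit(model=model, datamodule=MVTec())
```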
2. Custom Pre-Processor with Different Transforms
Create a pre-processor with custom transforms for different stages:
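A sketch of constructing a stage-aware pre-processor; the `PreProcessor` import path and the `train_transform`/`val_transform` keyword names are assumptions based on the attributes referenced elsewhere in this PR:

```python
from torchvision.transforms.v2 import Compose, Normalize, RandomHorizontalFlip, Resize

from anomalib.models import Padim
from anomalib.pre_processing import PreProcessor  # assumed module path

imagenet_norm = Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

pre_processor = PreProcessor(
    # Augment only during training; keep validation deterministic.
    train_transform=Compose([Resize((256, 256)), RandomHorizontalFlip(p=0.5), imagenet_norm]),
    val_transform=Compose([Resize((256, 256)), imagenet_norm]),
)
model = Padim(pre_processor=pre_processor)
```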
3. Disable Pre-Processing
To disable pre-processing entirely:
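Presumably something along these lines, assuming a falsy value switches the pre-processor off (transforms would then have to be handled by the datamodule or dataloader instead):

```python
from anomalib.models import Padim

# Assumption: pre_processor=False disables in-model pre-processing entirely.
model = Padim(pre_processor=False)
```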
4. Override Default Pre-Processor in Custom Model
Custom models can override the default pre-processor configuration:
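A toy sketch of a model providing its own default; the base-class import path, the `configure_pre_processor` hook name, and the `transform` keyword are assumptions for illustration, not the PR's exact API:

```python
from torchvision.transforms.v2 import CenterCrop, Compose, Normalize, Resize

from anomalib.models.components import AnomalyModule  # assumed import path
from anomalib.pre_processing import PreProcessor  # assumed module path


class CustomModel(AnomalyModule):
    """Toy model that ships a model-specific default pre-processor."""

    @classmethod
    def configure_pre_processor(cls, image_size: tuple[int, int] = (256, 256)) -> PreProcessor:
        # Hypothetical hook: replaces the generic default with transforms tuned for this model.
        return PreProcessor(
            transform=Compose([
                Resize(image_size),
                CenterCrop((224, 224)),
                Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
            ]),
        )
```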
Notes
Testing
✨ Changes
Select what type of change your PR is:
✅ Checklist
Before you submit your pull request, please make sure you have completed the following steps:
For more information about code review checklists, see the Code Review Checklist.