
Feature/rd++ #2386

Open · wants to merge 8 commits into main

Conversation

@cjy513203427 (Contributor) commented Oct 22, 2024

πŸ“ Description

  • Provide a clear summary of the changes and the issue that has been addressed.
  • πŸ› οΈ Fixes # (issue number)

✨ Changes

Select what type of change your PR is:

  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • πŸ”¨ Refactor (non-breaking change which refactors the code base)
  • πŸš€ New feature (non-breaking change which adds functionality)
  • πŸ’₯ Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • πŸ“š Documentation update
  • πŸ”’ Security update

βœ… Checklist

Before you submit your pull request, please make sure you have completed the following steps:

  • [x] πŸ“‹ I have summarized my changes in the CHANGELOG and followed the guidelines for my type of change (skip for minor changes, documentation updates, and test enhancements).
  • πŸ“š I have made the necessary updates to the documentation (if applicable).
  • [x] πŸ§ͺ I have written tests that support my changes and prove that my fix is effective or my feature works (if applicable).

For more information about code review checklists, see the Code Review Checklist.

I re-implemented part of the RD++ algorithm, which is based on reverse distillation. On top of that I added a self-supervised optimal transport loss, a reconstruction loss, a contrast loss, and multiscale projection layers, following the πŸ“„ Paper and the πŸ§‘β€πŸ’» Code.

Some things are still missing: the authors use a customised dataloader and a noise.py for the MVTec dataset, but I could not find a matching noise definition in Anomalib (Perlin noise is something else). Should I add a noise function to mvtec.py, which could affect other functions, or create a customised dataloader for RD++?

@cjy513203427 (Contributor, Author) commented Oct 22, 2024

This is the test code (I used the epoch counts recommended by the authors):

```python
import logging
from anomalib import TaskType
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import RevisitingReverseDistillation
from lightning.pytorch.callbacks import EarlyStopping, ModelCheckpoint

# configure logger
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Define the number of epochs for each category
epoch_mapping = {
    'carpet': 10,
    'leather': 10,
    'grid': 260,
    'tile': 260,
    'wood': 100,
    'cable': 240,
    'capsule': 300,
    'hazelnut': 160,
    'metal_nut': 160,
    'screw': 280,
    'toothbrush': 280,
    'transistor': 300,
    'zipper': 300,
    'pill': 200,
    'bottle': 200,
}

# datasets = ['screw', 'pill', 'capsule', 'carpet', 'grid', 'tile', 'wood', 'zipper', 'cable', 'toothbrush', 'transistor',
#             'metal_nut', 'bottle', 'hazelnut', 'leather']
# datasets = ['carpet']
datasets = ['bottle', 'hazelnut', 'leather']

for dataset in datasets:
    logger.info(f"================== Processing dataset: {dataset} ==================")
    task = TaskType.SEGMENTATION
    datamodule = MVTec(
        root="../datasets/MVTec",
        category=dataset,
        image_size=256,
        train_batch_size=32,
        eval_batch_size=32,
        num_workers=0,
        task=task,
    )

    model = RevisitingReverseDistillation()

    callbacks = [
        ModelCheckpoint(
            mode="max",
            monitor="pixel_AUROC",
        ),
        EarlyStopping(
            monitor="pixel_AUROC",
            mode="max",
            patience=3,
        ),
    ]

    # Get the number of epochs for the current dataset
    num_epochs = epoch_mapping.get(dataset, 100)  # Default to 100 if not found

    logger.info(f"Using {num_epochs} epochs for dataset: {dataset}")

    engine = Engine(
        max_epochs=num_epochs,
        check_val_every_n_epoch=3,
        callbacks=callbacks,
        pixel_metrics=["AUROC", "PRO"], image_metrics=["AUROC", "PRO"],
        accelerator="auto",  # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">,
        devices=1,
        logger=False,
    )

    logger.info(f"================== Start training for dataset: {dataset} ==================")
    engine.fit(datamodule=datamodule, model=model)

    logger.info(f"================== Start testing for dataset: {dataset} ==================")
    engine.test(datamodule=datamodule, model=model)
```

These are the results compared to RD:

| Model | Screw | Pill | Capsule | Carpet | Grid | Tile | Wood | Zipper | Cable | Toothbrush | Transistor | Metal Nut | Bottle | Hazelnut | Leather |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RD | 98.03 | 97.63 | 97.93 | 99.36 | 95.49 | 100.00 | 99.39 | 97.16 | 95.45 | 91.39 | 97.87 | 100.00 | 100.00 | 100.00 | 100.00 |
| RD++ v0 | 94.71 | 94.44 | 90.65 | 99.80 | 90.98 | 99.96 | 99.39 | 85.95 | 94.79 | 92.22 | 97.29 | 99.22 | 100.00 | 100.00 | 100.00 |

cjy513203427 and others added 6 commits October 22, 2024 14:35

  • …ic skeletal structure of the code.
  • Add datumaro annotation dataloader; update changelog; add examples
  • Add notebook 701e_aupimo_advanced_iv on load/save and statistical comparisons; make `AUPIMOResult.num_thresholds` optional; simplify cite us and mention intal; fix readme

@samet-akcay (Contributor)

@cjy513203427 thanks a lot for your contribution!

Regarding the noise, it could be a torchvision transform such as SimplexNoise, and it could be added as a train_transform either from the DataModule or from the model. Since this transform is model-specific, I think it might make more sense to define it as a model-specific transform, as is done here:

```python
def configure_transforms(image_size: tuple[int, int] | None = None) -> Transform:
    """Default transform for Padim."""
    image_size = image_size or (256, 256)
    # scale center crop size proportional to image size
    height, width = image_size
    center_crop_size = (int(height * (224 / 256)), int(width * (224 / 256)))
    return Compose(
        [
            Resize(image_size, antialias=True),
            CenterCrop(center_crop_size),
            Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ],
    )
```
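
For RD++ that could look roughly like the sketch below. Note the assumptions: torchvision does not ship a SimplexNoise transform, so the `AddNoise` module is a hypothetical stand-in for a noise generator ported from the authors' noise.py, and plain Gaussian noise (`torch.randn`) is only a placeholder; this is not the PR's implementation.

```python
from collections.abc import Callable

import torch
from torchvision.transforms.v2 import Compose, Normalize, Resize, Transform


class AddNoise(torch.nn.Module):
    """Hypothetical model-specific noise transform (stand-in for the authors' noise.py)."""

    def __init__(self, noise_fn: Callable[..., torch.Tensor], strength: float = 0.1) -> None:
        super().__init__()
        self.noise_fn = noise_fn  # e.g. a simplex-noise generator, once ported
        self.strength = strength

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Blend generated noise into the (float) image tensor.
        noise = self.noise_fn(image.shape).to(image)
        return image + self.strength * noise


def configure_transforms(image_size: tuple[int, int] | None = None) -> Transform:
    """Sketch of a model-specific default transform for RD++ (assumed, not the PR's code)."""
    image_size = image_size or (256, 256)
    return Compose(
        [
            Resize(image_size, antialias=True),
            AddNoise(noise_fn=torch.randn),  # placeholder: Gaussian instead of simplex noise
            Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ],
    )
```

If RD++ needs both the clean image and its noised copy in the same training step (for the contrast and optimal transport terms), a transform that noises every image may not be enough; generating the noised copy inside the model's training step would be an alternative.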

@haimat commented Nov 7, 2024

Hey guys, do you have any ETA on this?
