[Bug]: Training of anomalib model on custom dataset is taking too long! #2311
-
Describe the bug
I am trying to train an anomalib model on my custom dataset, but it is taking too long to train (even after 3 days there were no results). I am using the same code as provided in the anomalib docs:

from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import Patchcore

# Create the datamodule
datamodule = Folder(
    name="hazelnut_toy",
    root="datasets/hazelnut_toy",
    normal_dir="good",
    abnormal_dir="crack",
    task="classification",
)

# Setup the datamodule
datamodule.setup()

# Create the model and engine
model = Patchcore()
engine = Engine()

# Train a Patchcore model on the given datamodule
engine.train(datamodule=datamodule, model=model)

Output screen: it is just stuck at the model summary table shown under Logs below.

Dataset
Custom Dataset

Model
PatchCore

Steps to reproduce the behavior
Run the training code above.
OS information
OS information:
Expected behavior
The model should get trained.

Screenshots
No response

Pip/GitHub
pip

What version/branch did you use?
No response

Configuration YAML
# Import the datamodule
from anomalib.data import Folder
# Create the datamodule
datamodule = Folder(
    name="hazelnut_toy",
    root="datasets/hazelnut_toy",
    normal_dir="good",
    abnormal_dir="crack",
    task="classification",
)
# Setup the datamodule
datamodule.setup()

Logs
┌───┬───────────────────────┬──────────────────────────┬────────┬───────┐
│   │ Name                  │ Type                     │ Params │ Mode  │
├───┼───────────────────────┼──────────────────────────┼────────┼───────┤
│ 0 │ model                 │ PatchcoreModel           │ 643 K  │ train │
│ 1 │ _transform            │ Compose                  │ 0      │ train │
│ 2 │ normalization_metrics │ MetricCollection         │ 0      │ train │
│ 3 │ image_threshold       │ F1AdaptiveThreshold      │ 0      │ train │
│ 4 │ pixel_threshold       │ F1AdaptiveThreshold      │ 0      │ train │
│ 5 │ image_metrics         │ AnomalibMetricCollection │ 0      │ train │
│ 6 │ pixel_metrics         │ AnomalibMetricCollection │ 0      │ train │
└───┴───────────────────────┴──────────────────────────┴────────┴───────┘
Trainable params: 643 K
Non-trainable params: 0
Total params: 643 K
Total estimated model params size (MB): 2
Modules in train mode: 15
Modules in eval mode: 46
-
Hello, how big is your dataset and what resolution are the images? Both of these factors will affect the training time.
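For reference, here is a rough sketch of one way to check both numbers. It only uses Pillow and assumes the datasets/hazelnut_toy folder layout from the post above, so adjust the root path and extensions to your dataset:

from pathlib import Path
from PIL import Image

root = Path("datasets/hazelnut_toy")
# Collect every image file under the dataset root
image_paths = [p for p in root.rglob("*") if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".bmp"}]
print(f"Number of images: {len(image_paths)}")
# Print the distinct (width, height) resolutions present in the dataset
print("Resolutions:", {Image.open(p).size for p in image_paths})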
-
I have a total of 90 images in my dataset (900x900 resolution).
-
Can you try whether it works with 256x256? Maybe there is a different problem, especially if the output screen is stuck.
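One low-effort way to try this is to resize copies of the images on disk with Pillow before building the datamodule, which avoids depending on any particular anomalib resize argument. A rough sketch (datasets/hazelnut_toy_256 is just an example output folder name):

from pathlib import Path
from PIL import Image

src = Path("datasets/hazelnut_toy")
dst = Path("datasets/hazelnut_toy_256")
for path in src.rglob("*"):
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".bmp"}:
        continue
    out = dst / path.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    # Downscale a copy to 256x256, leaving the original untouched
    Image.open(path).resize((256, 256)).save(out)

After that, point the Folder datamodule at root="datasets/hazelnut_toy_256" and rerun the training code.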
-
@UTKARSH-VISCON, I don't think this is an Anomalib problem. Patchcore is computationally expensive and requires a lot of memory, especially during coreset sampling. As @abc-125 suggested, you could try reducing the image size to see if it helps a bit.
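If the run really does stall in the coreset subsampling step, another knob to experiment with is a smaller coreset_sampling_ratio when constructing the model. The snippet below is only a sketch: the 1% value is arbitrary, and the argument name and its default (typically 0.1) should be checked against the installed anomalib version:

from anomalib.models import Patchcore

# Same training setup as in the original post, but keep only 1% of the
# extracted patch embeddings in the memory bank during coreset subsampling
model = Patchcore(coreset_sampling_ratio=0.01)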
-
Hi, not sure if this will help.
My models train pretty fast with Patchcore, DFM, and FastFlow.
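For comparison, swapping one of those models into the training snippet from the original post only changes the model line. The class names below assume a recent anomalib 1.x release and may differ in other versions:

from anomalib.models import Dfm, Fastflow

# Pick one and pass it to engine.train(datamodule=datamodule, model=model)
model = Dfm()       # Deep Feature Modeling
model = Fastflow()  # FastFlow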