
Poor deblurring performance on a custom dataset #40

Open
ziFan99 opened this issue Oct 21, 2024 · 2 comments

Comments

ziFan99 commented Oct 21, 2024

I would like to ask about my current task, which involves recognizing blurred shipping labels. My dataset is self-generated by applying a blur-synthesis algorithm to clear shipping labels. However, after training your model on it, the results have not been satisfactory. Is your model suitable for this scenario (i.e., document blur, where details such as the text and numbers on the shipping labels are lost), or could my dataset simply be too small? Could you also share the dataset size you used in your deblurring experiments? I look forward to your reply.

Royalvice (Owner) commented Oct 21, 2024 via email

ziFan99 (Author) commented Oct 21, 2024

Hello, you're absolutely right. I used a blur-synthesis method to create the training dataset, but for inference I used real blurred images. I suspect the blur kernels of the real images differ from the ones used to build the training set, which is why the model performs poorly on real-world blur. Do you have any suggestions for this issue? Were the training datasets you used collected from real-world blurred images? If so, how did you align them with the clear images? I look forward to your reply. Thank you!
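A common mitigation for this kind of synthetic-to-real kernel gap (a general data-augmentation technique, not something the maintainer has confirmed for this repository) is to randomize the blur kernel for every training sample instead of reusing one fixed kernel, so the model sees a broader blur distribution. A minimal NumPy sketch, assuming grayscale images stored as float arrays; the function names are illustrative, not part of this project's code:

```python
import numpy as np

def random_motion_blur_kernel(size=15, rng=None):
    """Build a normalized linear motion-blur kernel at a random angle.

    Drawing a fresh kernel per sample varies both blur direction and
    effective length, which can narrow the gap between synthetic
    training blur and the blur seen in real photos.
    """
    rng = np.random.default_rng() if rng is None else rng
    kernel = np.zeros((size, size), dtype=np.float64)
    angle = rng.uniform(0.0, np.pi)  # random motion direction
    center = size // 2
    # Trace a line through the kernel center at the chosen angle.
    for t in np.linspace(-center, center, size * 4):
        x = int(round(center + t * np.cos(angle)))
        y = int(round(center + t * np.sin(angle)))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

def apply_kernel(img, kernel):
    """Naive 2D convolution with edge padding (illustration only;
    use cv2.filter2D or scipy.ndimage.convolve in practice)."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out
```

In a training pipeline this would be called once per sample (ideally mixed with Gaussian and defocus kernels, plus sensor noise and JPEG compression) so that no single synthetic kernel dominates what the model learns.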
