Update context model code #296
base: main
Conversation
Things still to do:
Codecov Report
Additional details and impacted files:
@@            Coverage Diff             @@
##             main     #296      +/-   ##
==========================================
+ Coverage   59.58%   61.60%   +2.01%
==========================================
  Files          35       34       -1
  Lines        6164     5949     -215
==========================================
+ Hits         3673     3665       -8
- Misses       2491     2284     -207
Current workflow is:
Summary
The current version of MapReader contains unused/untested code for using the patch context when training the model. From the 2021 paper:
"Context-aware patchwork method.
A limitation of the patchwork method is that it ignores neighboring patches when training or performing model inference. To capture the strong, spatial dependencies between neighboring patches (which will be common in maps for many types of labels [16]), MapReader supports building model ensembles as shown in Fig. 8. Model-1 and Model-2 are two CV models with neural network architectures defined by the user. These models can have different or similar architectures. For exam- ple, in one of our experiments, Model-2 was a pre-trained model on our patch dataset while Model-1 was a pre-trained model from PyTorch Image Models [40] with a different architecture. As shown in Fig. 8, a patch and its context image are fed into Model-2 and Model-1, respectively. In practice, the user only specifies the size of the context image, and MapReader extracts and preprocesses the context image from the dataset. Model-1 and Model-2 generate vector representations for the input images, V1 and V2. The size of these vectors are defined by the user and a combination of the two (e.g., by concatenation) is then fed into Model-3 for prediction. Such model ensembles can be an efficient approach to achieving high- performing CV models [37]."
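To make the ensemble structure concrete, here is a minimal PyTorch sketch of a two-branch model in the spirit of the figure described above. The class name, backbone choices (`resnet18`/`resnet34`) and `feature_dim` are illustrative assumptions, not MapReader's actual implementation:

```python
# Illustrative sketch only (not MapReader's classes): a two-branch ensemble
# in the spirit of the "context-aware patchwork method" quoted above.
import torch
import torch.nn as nn
from torchvision import models


class ContextEnsemble(nn.Module):
    """Hypothetical two-branch model: one branch for the patch, one for its context."""

    def __init__(self, feature_dim: int = 128, num_classes: int = 2):
        super().__init__()
        # Model-2: encodes the patch itself (backbone chosen arbitrarily here).
        self.patch_branch = models.resnet18(weights=None)
        self.patch_branch.fc = nn.Linear(self.patch_branch.fc.in_features, feature_dim)
        # Model-1: encodes the larger context image; may use a different architecture.
        self.context_branch = models.resnet34(weights=None)
        self.context_branch.fc = nn.Linear(self.context_branch.fc.in_features, feature_dim)
        # Model-3: takes the concatenated vectors (V1 and V2) and predicts the label.
        self.classifier = nn.Linear(2 * feature_dim, num_classes)

    def forward(self, patch: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        v2 = self.patch_branch(patch)      # vector representation of the patch
        v1 = self.context_branch(context)  # vector representation of the context image
        return self.classifier(torch.cat([v1, v2], dim=1))
```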
This PR updates this code to be simpler and more user-friendly.
Fixes #287
Addresses #17
Describe your changes
- `PatchDataset` and `PatchContextDataset` return image(s) as tuples. These datasets now return `(img,)` and `(img, context_img)` respectively (see the sketch after this list).
- `context_dataset` arg when creating datasets, to enable creation of `PatchContextDataset`-type datasets from the annotations loader.
- `twoParrellelModels` class to clarify patch/context branches of the model.
- `generate_layerwise_lrs` method for context datasets.
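For illustration, a minimal sketch of the new tuple-return convention, using hypothetical stand-in classes rather than the real `PatchDataset`/`PatchContextDataset`:

```python
# Minimal sketch of the (img,) / (img, context_img) return convention described above.
# The class names and file-based loading are assumptions, not MapReader's implementation.
from pathlib import Path

import torch
from torch.utils.data import Dataset
from torchvision.io import read_image


class SimplePatchDataset(Dataset):
    """Returns a one-element tuple ``(img,)`` per sample."""

    def __init__(self, patch_paths: list[Path]):
        self.patch_paths = patch_paths

    def __len__(self) -> int:
        return len(self.patch_paths)

    def __getitem__(self, idx: int) -> tuple[torch.Tensor]:
        img = read_image(str(self.patch_paths[idx])).float()
        return (img,)


class SimplePatchContextDataset(SimplePatchDataset):
    """Returns ``(img, context_img)`` per sample."""

    def __init__(self, patch_paths: list[Path], context_paths: list[Path]):
        super().__init__(patch_paths)
        self.context_paths = context_paths

    def __getitem__(self, idx: int) -> tuple[torch.Tensor, torch.Tensor]:
        (img,) = super().__getitem__(idx)
        context_img = read_image(str(self.context_paths[idx])).float()
        return (img, context_img)
```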
Checklist before assigning a reviewer (update as needed)
Reviewer checklist
Please add anything you want reviewers to specifically focus/comment on.