How to generate a report #15
Hi, thank you for your interest in this work. Sorry that this project does not have an inference-only script. In my view, the simplest way to generate reports on your custom dataset is to follow the same annotation format as the annotation file and leave the report annotation as an empty string. Then you can comment out the irrelevant training code in the trainer and keep only the parts needed for inference. After that, you can run the same training script for a single epoch and save the output from the beam search. In short, you need to modify the code a bit. Hope this helps you figure out the problem.
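For illustration, here is a minimal sketch of what such a placeholder annotation entry might look like, assuming the common MIMIC-CXR-style annotation layout; the exact field names ("id", "image_path", "report", "split") are assumptions and should be matched to this repository's actual annotation file.

```python
import json

# Hypothetical annotation entry for a custom image. The field names follow
# the common MIMIC-CXR-style layout and may differ from this repository's file.
annotation = {
    "test": [
        {
            "id": "custom_0001",
            "image_path": ["custom_0001.jpg"],
            "report": "",      # leave the report empty for inference
            "split": "test",
        }
    ]
}

with open("custom_annotation.json", "w") as f:
    json.dump(annotation, f, indent=2)
```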
For inference, the vocabulary and labels are also required. What should I do about them? Thanks in advance.
Hi, thank you for your interest in our work. The vocabulary is required to determine the words that can be generated. The labels are used to perform the prototype-based cross-modal querying and responding (see the details in our paper). Hope this clears up your confusion.
Thank you for the quick reply. Maybe my question was not clear, so let me explain a bit more. I am trying to write an inference script so that if I give an image as input, the model returns a generated report. So I wanted to know: in the case of inference, what should the vocabulary and labels be?
Hi, the vocabulary should be the one from the training set (the dataset used to train the model you load), e.g., the training corpus of MIMIC-CXR. The labels can be obtained with the CheXpert/CheXbert labeler if you have the report, or you can train a classification network on the 14 diseases using the training sets of MIMIC-CXR and CheXpert and then predict the labels purely from the image.
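A rough sketch of the second option (predicting the 14 disease labels directly from the image with a classification network); the DenseNet-121 backbone, checkpoint path, and 0.5 threshold are assumptions for illustration, not part of this repository.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Assumed setup: a DenseNet-121 fine-tuned on the 14 CheXpert disease labels.
# The checkpoint path and the 0.5 threshold are placeholders.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 14)
model.load_state_dict(torch.load("chexpert_classifier.pth", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("custom_0001.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.sigmoid(model(image))           # one probability per disease
labels = (probs > 0.5).int().squeeze(0).tolist()  # binary 14-dim label vector
```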
I stored the vocabulary generated in the training phase as a JSON file, and I retrieve it during inference. As for the labels, I will have to train a classification network, as there are no ground truth reports available. Your assistance has been greatly appreciated. |
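For anyone doing the same, a minimal sketch of saving the training-time vocabulary to JSON and reloading it at inference; the `token2idx`/`idx2token` attribute names are assumptions and should be adapted to the tokenizer class actually used in this repository.

```python
import json

# At training time: dump the tokenizer's word mappings.
# "token2idx" / "idx2token" are assumed attribute names.
def save_vocab(tokenizer, path="vocab.json"):
    with open(path, "w") as f:
        json.dump({"token2idx": tokenizer.token2idx,
                   "idx2token": tokenizer.idx2token}, f)

# At inference time: reload the same mappings so generated token ids
# can be mapped back to words.
def load_vocab(path="vocab.json"):
    with open(path) as f:
        vocab = json.load(f)
    return vocab["token2idx"], vocab["idx2token"]
```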
Never mind, I will provide a separate test file later to support quick inference.
Dear author, I apologize for disturbing you, but I sincerely believe this is a great piece of work. At present, I can run the project and the results are quite satisfactory, but I would like to know how to input an image and generate its corresponding report. Do I need to write another module myself? I look forward to your answer. Thank you very much.