Hello, apologies if this has already been covered elsewhere, as it seems to be a rather basic question. I am wondering why the torch sizes of `val_outputs` and `val_labels` fluctuate; specifically, while the batch size and number of channels remain constant, the length, width, and depth of the images seem to fluctuate. If the sizes were constantly decreasing, I would think this was due to stride/convolutions, but that does not seem to be the case. For context, I am trying my own dataset on the spleen segmentation UNet, but am encountering an indexing error that is not consistently reproducible. In contrast to the spleen data, I noticed that the torch sizes of `val_outputs` and `val_labels` are the same as my images, and thought that this might be part of the issue. Any advice would be appreciated. Thank you!
Hi, the fluctuations that you are seeing are caused by the transforms applied to the images/labels. In this particular example, it is `CropForegroundd`, which crops the image as small as possible without cropping any non-zero voxels. This depends on the image, and is the reason why you see these differences. You wouldn't see the fluctuations in the training data, however, because the transform following `CropForegroundd` is `RandCropByPosNegLabeld`, which creates 4 examples, each of size `(96, 96, 96)`.
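For reference, here is a minimal sketch of the two transforms being discussed, roughly following the spleen tutorial's training pipeline (the loading/channel transforms and the exact `pos`/`neg`/`image_threshold` values are assumed from the tutorial, not from this thread):

```python
from monai.transforms import (
    Compose,
    LoadImaged,
    EnsureChannelFirstd,
    CropForegroundd,
    RandCropByPosNegLabeld,
)

train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    # Crop to the smallest box that still contains every non-zero voxel of
    # "image"; the resulting spatial size therefore differs per image.
    CropForegroundd(keys=["image", "label"], source_key="image"),
    # Sample 4 fixed-size (96, 96, 96) patches per image, so every training
    # sample ends up with the same spatial shape and can be batched.
    RandCropByPosNegLabeld(
        keys=["image", "label"],
        label_key="label",
        spatial_size=(96, 96, 96),
        pos=1,
        neg=1,
        num_samples=4,
        image_key="image",
        image_threshold=0,
    ),
])
```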
Having images of different sizes compared to each other is fine, but it will cause an error if you have batch size > 1, since the tensors will be concatenated together, which requires them to be the same size. For this reason, you'll notice that in the spleen tutorial, training uses batch_size > 1 (because the cropped patches are all the same size), whereas validation uses batch_size = 1. If you want to increase the batch size for your validation data, then you could pad them all to be the same size.
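As a rough sketch of that last suggestion (the `val_files`/`val_transforms` names are assumed to match the spleen tutorial), one option is MONAI's `pad_list_data_collate`, which pads each batch up to its largest sample instead of requiring a hard-coded size:

```python
from monai.data import DataLoader, Dataset, pad_list_data_collate

# val_files / val_transforms are assumed to be defined as in the spleen
# tutorial, i.e. the validation transforms stop after CropForegroundd, so the
# resulting samples all have different spatial sizes.
val_ds = Dataset(data=val_files, transform=val_transforms)

# pad_list_data_collate pads every item in a batch up to the largest spatial
# size found in that batch before stacking, so batch_size > 1 works even
# though the individual samples differ in size.
val_loader = DataLoader(
    val_ds,
    batch_size=4,
    num_workers=2,
    collate_fn=pad_list_data_collate,
)
```

Alternatively, adding a `SpatialPadd` (or `ResizeWithPadOrCropd`) with a fixed `spatial_size` to `val_transforms` gives every validation sample the same shape up front, at the cost of choosing that size yourself.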