tensor is not a torch image #16
Hello, I have a new problem. I want to test this model on my own samples. I have the RGB images and depth images, but I cannot run inference_samples.py normally; it reports 'tensor is not a torch image'. Can you help me? Thank you!

Comments
The error description is pretty short. Can you please provide some further information, i.e., your environment?
I created the rgbd_segmentation environment and prepared the SUNRGB-D dataset. Then I ran inference_samples.py:

```
python inference_samples.py --dataset sunrgbd --ckpt_path ./trained_models/sunrgbd/r34_NBt1D.pth --depth_scale 1 --raw_depth
```
Are you able to run inference_samples.py with the provided samples? Are your images read successfully? What are the datatype and the shape of the images before line 73, where the error is thrown?
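(As an aside: a minimal way to check this, assuming the variable names img_rgb and img_depth from inference_samples.py, is to print their shapes and dtypes just before the preprocessor call:)

```python
# Debug sketch: print shape and dtype of both inputs right before
# the preprocessor call that raises the error (line 73).
print('rgb:  ', img_rgb.shape, img_rgb.dtype)
print('depth:', img_depth.shape, img_depth.dtype)
```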
I can run inference_samples.py with the provided samples. Is it related to bit depth?
I just changed inference_samples.py lines 59 and 60 and ran:

```
python inference_samples.py --dataset sunrgbd --ckpt_path ./trained_models/sunrgbd/r34_NBt1D.pth --depth_scale 1 --raw_depth
```

Then this error is reported:
```
Loaded checkpoint from ./trained_models/sunrgbd/r34_NBt1D.pth
Traceback (most recent call last):
  File "inference_samples.py", line 73, in <module>
    sample = preprocessor({'image': img_rgb, 'depth': img_depth})
  File "/home/admin/.conda/envs/rgbd_segmentation/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 70, in __call__
    img = t(img)
  File "/data/nas/workspace/jupyter/bisenetv2/ESANet-main/src/preprocessing.py", line 198, in __call__
    mean=self._depth_mean, std=self._depth_std)(depth)
  File "/home/admin/.conda/envs/rgbd_segmentation/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 175, in __call__
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/admin/.conda/envs/rgbd_segmentation/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 209, in normalize
    raise TypeError('tensor is not a torch image.')
TypeError: tensor is not a torch image.
```
A forum post on this error said that the order of the transform functions caused it. I tried that approach, but it didn't work. How can I test my own data? Thank you!
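(Background on this error: in the torchvision version used here, transforms.Normalize only accepts a torch tensor, so "tensor is not a torch image" typically means Normalize ran on an input that was never converted by ToTensor. A minimal sketch of the expected ordering, not ESANet's actual preprocessing code:)

```python
import torchvision.transforms as T

# Normalize expects a (C, H, W) torch tensor, so ToTensor must run first.
preprocess = T.Compose([
    T.ToTensor(),  # HWC uint8 array -> CHW float tensor in [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],  # illustrative ImageNet stats
                std=[0.229, 0.224, 0.225]),
])
```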
If you are able to run inference_samples.py with the provided samples, your setup is fine and the problem is most likely caused by your own inputs. Beyond that, as already mentioned by Mona, we need the dtypes and shapes of both images at this line for further debugging.
The provided samples:
- image_rgb: shape (424, 512, 3), dtype uint8
- image_depth: shape (424, 512), dtype float32

My samples:
- image_rgb: shape (424, 512, 3), dtype uint8
- image_depth: shape (424, 512, 3), dtype float32

Is that the problem? Thank you! These are my images.
The problem is related to your depth image: it is not a common depth image with the depth values encoded in a single channel, as yours has three channels. It looks more like another RGB image with gray values encoding the depth. You should check your depth image.
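(A minimal sketch of one way to verify and fix this, assuming the three channels hold identical gray values; the file name is a placeholder:)

```python
import cv2
import numpy as np

# Load the depth image unchanged and inspect its layout.
img_depth = cv2.imread('sample_depth.png', cv2.IMREAD_UNCHANGED)
img_depth = img_depth.astype(np.float32)

if img_depth.ndim == 3:
    # A 3-channel "depth" image usually stores the same gray value in
    # every channel; keep a single channel to get the expected (H, W).
    assert np.allclose(img_depth[..., 0], img_depth[..., 1])
    img_depth = img_depth[..., 0]

print(img_depth.shape, img_depth.dtype)  # expect (424, 512) float32
```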
OK, thank you very much!
I got the result. Thanks for your help! Now I have a new question: how can I output the semantic information corresponding to the different color regions?
What do you mean by "different color regions"?
For example, the orange area refers to the "table". How can I output the information "table"?
Before coloring (https://github.com/TUI-NICR/ESANet/blob/main/inference_samples.py#L87), the segmentation contains integers. Each integer refers to one category. For each category there exists a color and a class name, as defined here. If you only need the regions for the category "table", you can filter the segmentation by the respective integer value.
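(A minimal sketch of that filtering, with a dummy prediction map and an illustrative class list; the real integer-to-name mapping is defined by the dataset:)

```python
import numpy as np

# Dummy (H, W) integer segmentation map standing in for the model
# output before coloring.
prediction = np.array([[0, 3, 3],
                       [0, 3, 1]])

# Illustrative mapping from class integers to names; the actual list
# comes from the dataset definition (e.g. SUNRGB-D).
class_names = ['wall', 'floor', 'cabinet', 'table']

# Print the name of every category present in the prediction.
for class_id in np.unique(prediction):
    print(class_id, '->', class_names[class_id])

# Boolean mask of all pixels predicted as "table".
table_mask = prediction == class_names.index('table')
```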
I faced the same issue too, as the third dimension seemed not to be encoded properly, so I did some manipulation and it worked.