Running into the following `RuntimeError` when running the `adanet_tpu` tutorial using TF 2.2 and Adanet 0.9.0:

```
All tensors outfed from TPU should preserve batch size dimension, but got scalar Tensor("OutfeedDequeueTuple:1", shape=(), dtype=int64, device=/job:tpu_worker/task:0/device:CPU:0)
```
I have made some minor changes to the original tutorial code, i.e. replacing the `tf.contrib` module with its `tf.compat.v1` equivalents where applicable, as per the following Google Colab: https://colab.research.google.com/drive/1IVwzPL50KcxkNczaEXBQwCFdZE2kDEde

I have experienced the same issue when running TF 2 with the previous Adanet 0.8.0 version while training my own GCP project models on Cloud TPUs. Further details on this can be found on Stack Overflow here: https://stackoverflow.com/questions/62266321/tensorflow-2-1-using-tpuestimator-runtimeerror-all-tensors-outfed-from-tpu-sho
I am looking to establish whether I am missing something in the migration to TF 2 using Adanet.
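For what it's worth, this error message usually indicates that a scalar tensor (here an `int64`, likely the global step) is being sent through the TPU outfeed, which requires every tensor to keep a leading batch dimension. A minimal sketch of the commonly reported workaround, assuming the scalar comes from something like a `host_call` argument (the helper name `to_outfeed_shape` is hypothetical, not part of Adanet or TF):

```python
import tensorflow as tf


def to_outfeed_shape(scalar_tensor):
    """Give a scalar tensor the leading batch dimension the TPU outfeed expects.

    Tensors passed through the outfeed (e.g. host_call arguments in
    TPUEstimatorSpec) must have shape [batch, ...]; a bare scalar with
    shape () triggers the "should preserve batch size dimension" error.
    """
    return tf.reshape(scalar_tensor, [1])


# A scalar like the global step, shape ()
step = tf.constant(42, dtype=tf.int64)

# Reshaped to shape (1,) so it can be outfed safely
outfed = to_outfeed_shape(step)
```

I am not certain this is where Adanet's scalar originates, but if the offending tensor is produced inside the library rather than in user code, it would explain why the error persists across 0.8.0 and 0.9.0.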