The Python 3 GPU Dockerfile specifies ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7, which is the compute capability of the K80. AWS also offers g3 instances with M60 cards, which have compute capability 5.2. Could that line be changed to ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2 so that the TF that's built is optimized for all AWS GPU offerings? See NVIDIA's listing of CUDA compute capabilities.
2018-02-04 22:39:16.960722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1093] Ignoring visible gpu device (device: 0, name: Tesla M60, pci bus id: 0000:00:1b.0, compute capability: 5.2) with Cuda compute capability 5.2. The minimum required Cuda capability is 7.0.
This stems from the same issue (in the dl/tensorflow/1.4.0/Dockerfile-py3.gpu.cuda9cudnn7_aws Dockerfile):
ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,7.0 should perhaps be ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2,7.0?
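For reference, the proposed change in the Dockerfile would look something like this (a sketch; the surrounding build steps are assumed unchanged):

```dockerfile
# Build TensorFlow with kernels for all AWS GPU generations:
#   3.7 = Tesla K80 (p2), 5.2 = Tesla M60 (g3), 7.0 = Tesla V100 (p3)
ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2,7.0
```

Note that each additional compute capability adds compiled kernels for that architecture, so build time and binary size grow accordingly.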