Hi All,
On the same platform, the Nvidia 2080 Super card can't run tensorflow-gpu, but the Nvidia 2080 Ti can.
On Ubuntu 18.04.1 LTS, I successfully set up a tensorflow-gpu test environment with the NV2080 Super card (cuda_10.0.130_410.48_linux.run + cudnn-10.0-linux-x64-v7.4.2.24 + Anaconda3-5.2.0-Linux-x86_64 + tensorflow-gpu 1.13.1). But when I run

python tf_cnn_benchmarks.py --num_gpus=1 --batch_size=64 --model=resnet50 --variable_update=independent --local_parameter_device=gpu

the system throws an error. The same issue occurs with --model=inception3 / alexnet / vgg16. If I insert the 2080 Ti into this platform instead, the benchmark runs successfully.
I have also tried Ubuntu 16.04.4 LTS; it has the same issue. Here is the tail of the traceback:
File "/root/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 234, in __call__
  name=self.name)
File "/root/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 1953, in conv2d
  name=name)
File "/root/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1071, in conv2d
  data_format=data_format, dilations=dilations, name=name)
File "/root/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
  op_def=op_def)
File "/root/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
  return func(*args, **kwargs)
File "/root/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
  op_def=op_def)
File "/root/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
  self._traceback = tf_stack.extract_stack()
Nvidia 2080 super log.txt (27.9 KB)
Nvidia 2080 Ti log.txt (12.2 KB)
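Since the traceback ends inside tf.nn.conv2d, it may help to isolate whether the failure is in the benchmark harness or in the cuDNN convolution itself. Below is a minimal smoke test I would try on the 2080 Super box (a sketch, not from the report: the shapes are illustrative, and the graph-mode branch assumes the tensorflow-gpu 1.13.1 session API):

```python
# Minimal cuDNN conv2d smoke test (sketch; runs in graph mode on
# tensorflow-gpu 1.13.1 and eagerly on TF 2.x; shapes are illustrative).
import numpy as np
import tensorflow as tf

# One NHWC image and a 3x3 filter producing 8 output channels,
# loosely mimicking the first conv layer the benchmark would build.
image = tf.constant(np.random.rand(1, 224, 224, 3).astype(np.float32))
kernel = tf.constant(np.random.rand(3, 3, 3, 8).astype(np.float32))
conv = tf.nn.conv2d(image, kernel, strides=[1, 1, 1, 1], padding="SAME")

if tf.executing_eagerly():          # TF 2.x path
    out = conv.numpy()
else:                               # TF 1.x path, as in this report
    with tf.compat.v1.Session() as sess:
        out = sess.run(conv)

print(out.shape)  # expect (1, 224, 224, 8) if the convolution succeeds
```

If this single conv2d already fails on the 2080 Super but passes on the 2080 Ti, that would point at the driver/cuDNN stack for that card rather than at tf_cnn_benchmarks.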