Hi, we're having an issue running a number of models on a GTX 1660 Ti. We tested on both Ubuntu 18.04.3 LTS and CentOS 7. The error is "Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR". The commonly suggested fix is to add "config.gpu_options.allow_growth = True", which we did, but it doesn't seem to help. The installed driver version is 440.59.
import tensorflow as tf
import keras.backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
session = tf.Session(config=config)
K.set_session(session)  # make Keras use this session
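In case it helps others hitting this: since the in-graph allow_growth setting didn't take effect for us, a minimal sketch of an alternative is to set the TF_FORCE_GPU_ALLOW_GROWTH environment variable before TensorFlow is imported (honored by TF 1.14 and later); this is an assumption about the workaround, not a confirmed fix for this card.

```python
import os

# Must be set BEFORE TensorFlow is imported anywhere in the process:
# TF 1.14+ reads this variable at GPU initialization and enables
# on-demand memory growth, which often avoids
# CUDNN_STATUS_INTERNAL_ERROR on lower-VRAM cards like the 1660 Ti.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# import tensorflow as tf  # import only after the variable is set
```

The key difference from the ConfigProto approach is timing: the environment variable is read when the GPU device is first initialized, so it cannot be applied too late the way a session config can.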
Could you please share a sample repro script and the model file so we can help better?
Also, can you provide details on the platforms you are using:
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version
See answers below. The notebook is attached, and the model URL is in the original post.
o CUDA version - CUDA Version: 10.0
o CUDNN version - CUDNN Version 7.6.2 (also tried 7.6.5, same result)
o Python version [if using python] - Python 3.6.8
o Tensorflow and PyTorch version - TF version: 1.15.0, no PyTorch
o TensorRT version - not installed

05_nudenet.zip (4.03 KB)