I’m using NVIDIA Nsight Compute on Windows 10 to profile a Python TensorFlow script that runs on the GPU, but I’ve been getting the following errors:
==PROF== Profiling - 1: 0%…50%…100%
==PROF== Profiling - 2: 0%…50%…100%
2020-01-07 22:50:37.733710: E tensorflow/stream_executor/gpu/gpu_timer.cc:87] Invalid argument: error recording CUDA event on stream 0x27197cec670: CUDA_ERROR_UNKNOWN: unknown error
2020-01-07 22:50:37.736016: E tensorflow/stream_executor/gpu/gpu_timer.cc:55] Internal: error destroying CUDA event in context 0x2717be08d20: CUDA_ERROR_UNKNOWN: unknown error
2020-01-07 22:50:37.739478: E tensorflow/stream_executor/gpu/gpu_timer.cc:60] Internal: error destroying CUDA event in context 0x2717be08d20: CUDA_ERROR_UNKNOWN: unknown error
2020-01-07 22:50:37.741763: F tensorflow/stream_executor/cuda/cuda_dnn.cc:189] Check failed: status == CUDNN_STATUS_SUCCESS (7 vs. 0)Failed to set cuDNN stream.
==PROF== Report: profile.nsight-cuprof-report
While Nsight Compute is on the second line above (Profiling - 2), it freezes at 0%, and the screen goes black for a second before the rest of the output is displayed.
I’m on Windows 10 with an NVIDIA TITAN RTX, driver version 436.48, Python 3.7.4, TensorFlow-GPU 1.14.0, and cudatoolkit 10.0. My system has 2 GPUs, both TITAN RTX.
According to the documentation, GPUs with the Turing architecture (which the TITAN RTX has) should be supported. The error only occurs when I run a TensorFlow session with sess.run().
Does anyone know why this is happening?
UPDATE: I’ve updated my post since I originally put in the wrong details; Python is installed locally (without Anaconda).