TensorFlow Runtime Error on TX2

On a TX2 with JetPack 3.1, I got this error while running FCN:

F tensorflow/stream_executor/cuda/cuda_dnn.cc:222] Check failed: s.ok() could not find cudnnCreate in cudnn DSO; dlerror: /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so: undefined symbol: cudnnCreate
Aborted (core dumped)

But I have libcudnn installed here,

nvidia@tegra-ubuntu:~/tx1-new-1128/ox_code/build/bin$ env | grep LD
LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:/usr/local/lib:/lib/:/usr/lib:/usr/local/cuda/lib64:/usr/lib/aarch64-linux-gnu/
nvidia@tegra-ubuntu:~/tx1-new-1128/autox_code/build/bin$
nvidia@tegra-ubuntu:/usr/lib/aarch64-linux-gnu$ ls libcudnn*
libcudnn.so libcudnn.so.6 libcudnn.so.6.0.21 libcudnn_static.a libcudnn_static_v6.a

Could anyone help?

thanks,

Steve

Hi,

The error may come from a cuDNN version mismatch.
Please confirm that the cuDNN version installed on the device is the same one the TensorFlow wheel was built against.
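As a quick check, you can try loading each cuDNN soname from Python and see whether cudnnCreate resolves. This is only a minimal sketch; the two soname values below are assumptions (libcudnn.so.6 comes from your ls output, libcudnn.so.5 is a guess at what an older prebuilt wheel might expect), so adjust them to your case:

from __future__ import print_function
import ctypes

# Assumed sonames: .so.6 is what JetPack 3.1 installs, .so.5 is a guess
# at what the prebuilt wheel may have been linked against.
for name in ("libcudnn.so.5", "libcudnn.so.6"):
    try:
        lib = ctypes.CDLL(name)
        print(name, "loads; cudnnCreate found:", hasattr(lib, "cudnnCreate"))
    except OSError as err:
        print(name, "failed to load:", err)

If the library your wheel expects fails to load or lacks the symbol, that points to the version mismatch.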

Thanks.

Good question; I don't know, since I didn't build TensorFlow myself. I got tensorflow-1.0.1-cp27-cp27mu-linux_aarch64.whl from https://devtalk.nvidia.com/default/topic/1027310/jetson-tx2/tensorflow-runtime-error-on-tx2/post/5225237/.

Hi,

Your link looks incorrect. Please recheck it.

Usually, we use this wheel for JetPack 3.1, for your reference.

Thanks.

I built TensorFlow locally and now it works perfectly. My FCN is up and running, even though its performance is not as good as I expected (~2.4 fps).
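For anyone hitting the same error, a quick sanity check that a locally built wheel actually sees the GPU is the snippet below. It is just a minimal sketch using the standard TensorFlow 1.x device-listing API, nothing specific to FCN or the TX2:

from __future__ import print_function
import tensorflow as tf
from tensorflow.python.client import device_lib

print("TensorFlow version:", tf.__version__)
# List the devices TensorFlow can see; a working CUDA/cuDNN setup
# should include a GPU device alongside the CPU.
print([d.name for d in device_lib.list_local_devices()])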

thanks!

Steve

Hi,

Good to know.
We also have a tutorial on running FCN with TensorRT, for your reference:
https://github.com/dusty-nv/jetson-inference#image-segmentation-with-segnet

Thanks.