Please forgive my lack of English or technical knowledge; I am a newbie :)
Steps I followed:
1. Created a custom model on my laptop (Intel i3, Windows 10, TensorFlow 2.0 CPU build). Tried inference, and it works on the laptop.
2. Saved this model.
3. Transferred and loaded this model on a Jetson Nano.
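For reference, the save and load steps look roughly like this (a minimal sketch with a toy model standing in for my real one; 'sample.h5' is the actual filename I use):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for my custom model (the real one is larger).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# On the laptop: save the model to a single HDF5 file.
model.save("sample.h5")

# On the Jetson Nano (after transferring sample.h5): load and run inference.
loaded = tf.keras.models.load_model("sample.h5")
pred = loaded.predict(np.zeros((1, 8), dtype=np.float32))
print(pred.shape)  # (1, 1)
```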
The Jetson Nano was flashed with the official image from the Jetson Nano website. It has CUDA 10.0 and cuDNN 7.5, and it runs the official NVIDIA TensorFlow 2.0 build downloaded from the Jetson Nano page, along with pip3, Python 3, and the other development essentials listed on that same page.
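In case it helps, this is roughly how the setup can be confirmed from Python on the Nano (a quick sketch; which GPUs show up obviously depends on the machine it runs on):

```python
import tensorflow as tf

print("TensorFlow:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
# Lists the GPUs TensorFlow can actually see (empty on the CPU-only laptop).
print("Visible GPUs:", tf.config.experimental.list_physical_devices("GPU"))
```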
4. Tried inference with this model on the Nano. However, it fails with "Failed to get convolution algorithm. cuDNN failed to initialize".
5. Tried converting the 'sample.h5' Keras model obtained in step 2 into a UFF/TensorRT graph file. Tried inference on the Nano again, but got the same error as in step 4.
6. Next, created a model similar to the one in step 1, but in Google Colab (GPU runtime, TensorFlow 1.15, cuDNN 7.6.5, CUDA 10.0.130). Inference of this model also works there in Colab itself.
7. However, when I download this model and load it on the Jetson Nano, it gives the same cuDNN error.
I tried the solutions mentioned on the page below and on many other blogs and pages, but none of them solved it.
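For context, one suggestion that comes up repeatedly for this error is enabling GPU memory growth before the model is created, since the Nano's GPU shares RAM with the CPU and an up-front full allocation can make cuDNN initialization fail. A minimal sketch of that workaround, assuming the TF 2.0 `tf.config.experimental` API (I may well have applied it incorrectly):

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    # Allocate GPU memory incrementally instead of grabbing it all at once.
    tf.config.experimental.set_memory_growth(gpu, True)

print("GPUs configured for memory growth:", len(gpus))
# ...only after this, load the model and run inference as usual.
```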
Is the problem that the Jetson Nano's cuDNN/CUDA versions don't match Google Colab's, so that I need to make them match?
If yes, how should I change the cuDNN/CUDA versions on my Jetson Nano? They come preinstalled when flashing the memory card with the image (I did nothing extra to install CUDA/cuDNN on the Nano; it came loaded with the versions described above).
If no, then please let me know whether I can work the other way around and change the cuDNN version in Google Colab to match my Nano.
I will wait for your reply, as I am stuck and need to complete and submit this project ASAP.
Thanks and regards,