I’m building a Faster R-CNN model in a TLT docker, then creating an engine file with tlt-converter on my computer (x86). I transfer that engine to my Jetson Nano 2GB, but I get error messages when I try to run DeepStream with that engine.
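For reference, the conversion step on my x86 machine looks roughly like the sketch below. The key, input dimensions, and output node names are placeholders for the values from my model's export step, and the paths are just examples:

```bash
# Run on the x86 machine where the TLT docker lives.
# $KEY, -d dimensions, and -o output nodes are placeholders
# taken from my faster_rcnn export configuration.
./tlt-converter -k $KEY \
                -d 3,544,960 \
                -o <output_node_names> \
                -t fp16 \
                -m 1 \
                -e frcnn_fp16.engine \
                frcnn.etlt
```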
After some research, the error seems to indicate that the problem comes from the mismatch in TensorRT and CUDA (and cuDNN) versions between the TLT docker (TensorRT 7.2, CUDA 11.1) and the Jetson Nano (TensorRT 7.1, CUDA 10.2).
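These are the versions I saw when checking both sides with commands along these lines (exact package names may differ depending on how TensorRT is installed in the container):

```bash
# Inside the TLT docker (x86):
dpkg -l | grep -i tensorrt
nvcc --version

# On the Jetson Nano (JetPack):
dpkg -l | grep -i tensorrt
cat /usr/local/cuda/version.txt
```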
Is there a docker available with the correct TensorRT/CUDA versions to be compatible with the Jetson Nano, or should I make a custom one?