Mask R-CNN on Jetson Nano B01

Hi, for my internship I want to use Mask R-CNN on a Jetson Nano. I know that you're unable to train this model on a Jetson Nano (B01), so I trained it on Google Colab. Now I want to deploy the trained model on the Jetson and found this topic: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMaskRCNN
I came to the conclusion that the steps in that topic are meant to be executed on other hardware (correct me if I'm wrong). The current situation is as follows:

Situation
I have a Jetson Nano B01 (4 GB version), a laptop with an NVIDIA GeForce 940MX, and that's about it. For my internship, Mask R-CNN doesn't have to run in real time, so detection time isn't an issue (I've read that Mask R-CNN would be terribly slow on a Jetson Nano).

Question

  1. Is there any way to run my Mask R-CNN .h5 model on my Jetson Nano B01 (with or without converting it) using only my available hardware?
  2. Is there a way to deploy a Mask R-CNN model (.h5) on a Jetson Nano B01 without changes?
  3. Is there a way, with my hardware, to convert the Mask R-CNN model using the steps in the topic mentioned above?

Regards

Hi,

The sample can be run on the Jetson platform, but please check out the release/7.1 branch for compatibility.

The sample above demonstrates how to convert the Mask R-CNN model to TensorRT, and it's expected to give optimal performance on the Nano.
If you prefer to run the model with Keras, you can install the TensorFlow package from this page.
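For reference, installing NVIDIA's TensorFlow build on a Nano usually looks like the following sketch. The JetPack tag in the index URL (`v44` here) is an assumption; match it to your JetPack release as described on the linked page.

```shell
# Hedged sketch: install NVIDIA's TensorFlow wheel for Jetson
# (the "v44" JetPack tag is an assumption -- use the tag for your JetPack version)
sudo apt-get update
sudo apt-get install -y python3-pip libhdf5-serial-dev hdf5-tools

# Matterport Mask R-CNN expects TensorFlow 1.x, hence the version pin
sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'
```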

By the way, I don't think the Nano is terribly slow if you run the model with TensorRT.
Some frameworks (e.g. TensorFlow) tend to be slow since they aren't designed for a shared-memory environment like Jetson's.
https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks
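Building the sample on the Nano from the compatible branch might look like the sketch below. The TensorRT library/include paths are assumptions for a standard JetPack install; verify them on your system before configuring.

```shell
# Hedged sketch: fetch TensorRT OSS on the compatible branch and build the samples
git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive

# Out-of-source build; TRT_LIB_DIR/TRT_INC_DIR point at the TensorRT that
# JetPack installed (paths below are assumptions -- check your system)
mkdir -p build && cd build
cmake .. \
  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu \
  -DTRT_INC_DIR=/usr/include/aarch64-linux-gnu \
  -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc
make -j"$(nproc)"
```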

Thanks.

Thanks for the fast reply. I've been trying to follow the solution you posted but can't seem to get past the CMake step for TensorRT.

I'm executing the cmake command as follows:
cmake -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc -DCUBLASLT_LIB=/usr/local/cuda-10.2/targets/aarch64-linux/lib/libcublas.so …
I added the compiler path (I don't know why it wasn't found automatically) and changed the CUDA version to mine. When I run make, it builds most of the project but always fails with the following error (excerpt):

fcPlugin.cpp:(.text+0x391c): undefined reference to `cublasLtMatmulAlgoCapGetAttribute'
fcPlugin.cpp:(.text+0x3b88): undefined reference to `cublasLtMatmulDescCreate'
fcPlugin.cpp:(.text+0x3ba0): undefined reference to `cublasLtMatmulDescSetAttribute'
fcPlugin.cpp:(.text+0x3bb4): undefined reference to `cublasLtMatmulDescSetAttribute'
fcPlugin.cpp:(.text+0x3bcc): undefined reference to `cublasLtMatrixLayoutCreate'
fcPlugin.cpp:(.text+0x3be4): undefined reference to `cublasLtMatrixLayoutCreate'
fcPlugin.cpp:(.text+0x3bfc): undefined reference to `cublasLtMatrixLayoutCreate'
fcPlugin.cpp:(.text+0x3fc0): undefined reference to `cublasLtCreate'
fcPlugin.cpp:(.text+0x4044): undefined reference to `cublasLtDestroy'
fcPlugin.cpp:(.text+0x4280): undefined reference to `cublasLtCreate'
fcPlugin.cpp:(.text+0x42fc): undefined reference to `cublasLtDestroy'

All the errors are unresolved references to cuBLAS functions, even though cuBLAS is installed on the system. Any idea how to resolve this?

Regards

Hi,

The libcublas.so location changed in JetPack 4.4.1.
Could you update the library path and try again?

cmake -DCMAKE_CUDA_COMPILER:PATH=/usr/local/cuda/bin/nvcc -DCUBLASLT_LIB=/usr/lib/aarch64-linux-gnu/libcublas.so
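If the link error persists, it may help to confirm where JetPack actually placed the cuBLAS libraries before re-running CMake. A quick check (the paths below are assumptions for a JetPack 4.4.x install):

```shell
# Hedged check: list candidate cuBLAS locations on the Nano
# (JetPack 4.4.1 moved cuBLAS out of the CUDA toolkit directory)
ls -l /usr/lib/aarch64-linux-gnu/libcublas*.so* 2>/dev/null
ls -l /usr/local/cuda/targets/aarch64-linux/lib/libcublas*.so* 2>/dev/null

# Fallback: ask the dynamic linker cache which cuBLAS libraries it knows about
ldconfig -p | grep cublas
```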

Thanks.