Problem running ssd_mobilenet_v2 on Jetson Nano through pb to uff to engine

Hi all.

It has now been two days since I started trying to run ssd_mobilenet_v2_2018_03_29 on the Jetson Nano by converting the .pb to .uff and then to .engine, without success…

I tried this tutorial:

It ended with “TypeError: Cannot convert value 0 to a TensorFlow Dtype”. I think my package/library versions are too new for the code used, but I cannot find what to change to make them fit, if that is indeed the problem.
The full error output:

The error seems to come from an update of the graphsurgeon converter, which is mentioned here:
GitHub - AastaNV/TRT_object_detection: Python sample for referencing object detection model with TensorRT
So I added the following lines to the node_manipulation.py file:

node.name = name
node.op = op if op else name
node.attr["dtype"].type = 1
for key, val in kwargs.items():
    if key == "dtype":
        node.attr["dtype"].type = val.as_datatype_enum

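For context, the logic of that patch can be sketched in plain Python. This is a minimal sketch of the dtype handling only, assuming the newer graphsurgeon passes a tf.DType object in kwargs while the old code expected a bare enum int; `FakeDType` and `normalize_dtype` are hypothetical stand-ins so the sketch runs without TensorFlow installed:

```python
# Sketch of the dtype handling patched into node_manipulation.py.
# FakeDType mimics tf.DType's as_datatype_enum attribute; it is a
# stand-in for illustration, not a real TensorFlow class.

class FakeDType:
    def __init__(self, enum):
        self.as_datatype_enum = enum

def normalize_dtype(value, default=1):
    """Return the integer dtype enum the UFF converter expects.

    default=1 corresponds to DT_FLOAT, matching the patch above.
    """
    if hasattr(value, "as_datatype_enum"):  # tf.DType-like object
        return value.as_datatype_enum
    if isinstance(value, int):              # already a bare enum int
        return value
    return default

print(normalize_dtype(FakeDType(3)))
print(normalize_dtype(7))
print(normalize_dtype(None))
```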
After running main.py again, I get the following in the terminal:

(…)
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:143] Marking [‘NMS’] as outputs
No. nodes: 1094
UFF Output written to tmp.uff
#assertionflattenConcat.cpp,49

aborted

And a window appeared three times during the execution, saying:
problem

I put a few prints in the code to see where it stops. Apparently it is this line:

parser.parse('tmp.uff', network)

that is failing. A print placed just before it was displayed, but one placed just after it was not. I do not know why…
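In case it helps others reproduce this step, the parse-and-build stage looks roughly like this. This is a minimal sketch against the TensorRT 7 Python API; the input name "Input", its shape, and the workspace size are assumptions taken from typical SSD samples, and checking the return value of parser.parse at least surfaces a parse failure instead of a silent abort:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
# Register the built-in plugins (FlattenConcat, NMS, GridAnchor, ...).
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input("Input", (3, 300, 300))  # assumed name/shape
    parser.register_output("NMS")
    builder.max_workspace_size = 1 << 28

    if not parser.parse("tmp.uff", network):
        raise RuntimeError("UFF parsing failed; check the converter log")

    engine = builder.build_cuda_engine(network)
    with open("ssd_mobilenet_v2.engine", "wb") as f:
        f.write(engine.serialize())
```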


Another tutorial I tried:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffSSD
A .uff file is generated here, but when I keep following the steps and swap in ssd_mobilenet_v2 (the sample was built for ssd_inception_v2), it ends up with “all concat input tensors must have the same dimensions”.

I have:
JetPack 4.5.1
TensorRT 7.1.3
CUDA 10.2
cuDNN 8.0
TensorFlow 1.15.5

Can somebody give me a hint or a tutorial for TensorRT 7.1 with my JetPack version?
Or even the .bin or .engine file, if your configuration is the same as mine ^^

Thanks :)

I finally managed to run ssd_mobilenet_v2 on the Jetson Nano.

This topic was very helpful:
https://www.minds.ai/post/deploying-ssd-mobilenet-v2-on-the-nvidia-jetson-and-nano-platforms

Overall, after a lot of trial and error, the code in this configuration file https://static.wixstatic.com/archives/f05f97_3388f445c69c4b338497c878b142c996.zip
is almost right; you just need to change the inputOrder to inputOrder=[1, 0, 2].
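For reference, the relevant piece of such a conversion config looks something like the following. This is a hedged fragment: the plugin parameter values shown are typical NMS_TRT settings for this kind of config and may differ from the exact file in the zip; the one change that mattered for me is the inputOrder line:

```python
import graphsurgeon as gs

# NMS plugin node from the UFF conversion config (fragment only; the
# other nodes and exact values are as in the downloaded config.py).
# All parameter values here are illustrative except inputOrder,
# which is the change described above for ssd_mobilenet_v2_2018_03_29.
NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=0.3,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    inputOrder=[1, 0, 2],  # the key change for this model
    confSigmoid=1,
    isNormalized=1,
)
```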

Also, do not forget to add this in node_manipulation.py:

> node.name = name
> node.op = op if op else name
> node.attr["dtype"].type = 1
> for key, val in kwargs.items():
>     if key == "dtype":
>         node.attr["dtype"].type = val.as_datatype_enum

Also, the following line in main.py has to be commented out:

ctypes.CDLL("lib/libflattenconcat.so")
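If I understand correctly why commenting that out works: TensorRT 7 already ships a FlattenConcat plugin in libnvinfer_plugin, so the separately compiled .so is redundant there (and an older build of it can clash, as in the flattenConcat assertion above). A minimal sketch, assuming the built-in plugin registry is used instead:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
# Registers the plugins bundled with TensorRT 7 (FlattenConcat_TRT,
# NMS_TRT, GridAnchor_TRT, ...), so loading a custom
# lib/libflattenconcat.so via ctypes.CDLL is no longer needed.
trt.init_libnvinfer_plugins(logger, "")
```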

and it is working fine for me with:
JetPack 4.5.1
TensorRT 7.1.3
CUDA 10.2
cuDNN 8.0
TensorFlow 1.15.5

Glad to know the issue is resolved, thanks for sharing!
