Hi all.
It has now been 2 days since I started trying to run ssd_mobilenet_v2_2018_03_29 on a Jetson Nano by converting the .pb to .uff and then to .engine, without success…
I tried this tutorial :
It ended with “TypeError: Cannot convert value 0 to a TensorFlow Dtype”. I think my package/library versions are too new for the code used, but I cannot find what to change to make it fit, if that is indeed the problem.
The full error output:
The error is about an update of the graphsurgeon converter, which is mentioned here:
→ GitHub - AastaNV/TRT_object_detection: Python sample for referencing object detection model with TensorRT
So I added the following lines to the node_manipulation.py file:
node.name = name
node.op = op if op else name
node.attr["dtype"].type = 1
for key, val in kwargs.items():
    if key == "dtype":
        node.attr["dtype"].type = val.as_datatype_enum
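For anyone else hitting this: as far as I understand, the hard-coded 1 is TensorFlow's enum value for DT_FLOAT, and as_datatype_enum converts a tf.DType object to that integer, which is why the patch fixes the “Cannot convert value 0” error. A minimal sketch of what the patched logic does, using hypothetical stand-in classes (FakeNode, FakeDType are mine, not from graphsurgeon or TensorFlow):

```python
# Sketch of the patched create_node logic from node_manipulation.py.
# FakeNode stands in for the TensorFlow NodeDef protobuf and FakeDType
# for tf.DType; both are simplified stand-ins for illustration only.
DT_FLOAT = 1  # TensorFlow's dtype enum value for float32


class FakeAttr:
    def __init__(self):
        self.type = 0  # 0 = DT_INVALID, which is what triggered the error


class FakeNode:
    def __init__(self):
        self.name = ""
        self.op = ""
        self.attr = {"dtype": FakeAttr()}


class FakeDType:
    """Stands in for tf.DType, which exposes .as_datatype_enum."""
    def __init__(self, enum):
        self.as_datatype_enum = enum


def create_node(name, op=None, **kwargs):
    node = FakeNode()
    node.name = name
    node.op = op if op else name
    node.attr["dtype"].type = DT_FLOAT  # default to float, as in the patch
    for key, val in kwargs.items():
        if key == "dtype":
            # take the enum from the tf.DType instead of passing the
            # DType object itself into the protobuf (the original bug)
            node.attr["dtype"].type = val.as_datatype_enum
    return node
```

So create_node("NMS") produces a float node by default, and passing dtype=FakeDType(9) (for example) overrides the enum explicitly.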
After running main.py again I have in the terminal:
(…)
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:143] Marking ['NMS'] as outputs
No. nodes: 1094
UFF Output written to tmp.uff
#assertion flattenConcat.cpp,49
Aborted
A window also appeared 3 times during the execution saying:
I put a few prints in the code to see where it stops. Apparently the failing line is:
parser.parse('tmp.uff', network)
A print placed just before it is displayed, but one placed just after it is not. I do not know why…
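For what it is worth, parser.parse returns a boolean in TensorRT's Python API, but the flattenConcat assertion above aborts inside native code, so nothing after the call ever runs, prints included. A small wrapper I would use to at least turn a clean False return into a readable error (the helper name parse_or_fail is mine, not part of TensorRT):

```python
def parse_or_fail(parse_fn, uff_path, network):
    """Call a UFF-parser-style function and raise on a clean failure.

    parse_fn is expected to behave like tensorrt.UffParser.parse: it
    returns True on success and False on failure. Note that this cannot
    catch a native assert/abort (like the flattenConcat one), which
    kills the whole process before Python regains control.
    """
    if not parse_fn(uff_path, network):
        raise RuntimeError("UFF parse failed for %s" % uff_path)
    return network
```

In main.py this would replace the bare parser.parse('tmp.uff', network) with parse_or_fail(parser.parse, 'tmp.uff', network), so a silent parse failure at least stops the script with a message.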
Another tutorial I tried:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffSSD
A .uff file is generated here too, but when I follow the remaining steps with ssd_mobilenet_v2 (the sample was built for ssd_inception_v2), it ends with “all concat input tensors must have the same dimensions”.
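To make sense of that message: a concat layer requires every input to agree on all dimensions except the concatenation axis, and in the SSD samples a mismatch usually means the plugin/config parameters do not match the network you actually exported. A small illustration of the rule itself, nothing TensorRT-specific (function name is mine):

```python
def concat_compatible(shapes, axis):
    """Return True if tensors with these shapes can be concatenated
    along `axis`: ranks must match and every dimension other than
    `axis` must be identical across all inputs."""
    ref = shapes[0]
    for s in shapes[1:]:
        if len(s) != len(ref):
            return False
        for i, (a, b) in enumerate(zip(ref, s)):
            if i != axis and a != b:
                return False
    return True
```

For example, (1, 100, 4) and (1, 50, 4) concatenate fine along axis 1, but (1, 100, 4) and (1, 50, 6) do not, which is the situation the error is complaining about.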
I have:
Jetpack 4.5.1
TensorRT 7.1.3
CUDA 10.2
cuDNN 8.0
Tensorflow 1.15.5
Can somebody give me a hint, or point me to a tutorial for TensorRT 7.1 with my JetPack version?
Or even share the .bin or .engine file if your configuration is the same as mine ^^
Thanks :)