Custom trained model on Jetson Nano

Hey Guys,
I wanted to reuse a custom model I trained with TensorFlow 1.12 on a Jetson Nano (JetPack 4.3): SSD MobileNet V2, 1024x1024.
I tried some conversion tools, like: test_nms_fixed.py (5.2 KB)
The conversion failed multiple times; some issues could be solved, but not all.
I know that the script has to be configured for my specific case, but I don't know how exactly.
Could you please provide me with a configured conversion script that works for my model?
Here is the model I use: ssd_mobilenet_v2_egohands_ist.pb (3.0 MB)

Hi,

Do you want to run inference on a frozen TensorFlow model with TensorRT?
If yes, it's recommended to use ONNX as the intermediate format, since UFF is deprecated.
https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-7.html#tensorrt-7

The workflow has two steps:
1. Convert the .pb model into the ONNX format.
You can do this with the tf2onnx module (see the sketch after this list):

2. Run inference on the ONNX model with TensorRT.
You can find an example in the comment below:
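As a rough sketch (not the exact command from the original examples), a frozen TF object detection graph can usually be converted from the command line like this; the input/output tensor names follow the standard TF object detection API export and are assumptions here, so check them against your own graph (e.g. with Netron), and the opset may need adjusting for the TensorRT version on JetPack 4.3:

```
# Convert the frozen graph to ONNX (adjust the tensor names to your graph;
# if the NMS subgraph does not convert, use the raw loc/conf nodes instead).
python3 -m tf2onnx.convert \
    --input ssd_mobilenet_v2_egohands_ist.pb \
    --inputs image_tensor:0 \
    --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0 \
    --opset 11 \
    --output model.onnx

# Quick sanity check that TensorRT can parse the model and build an engine.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx
```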

Thanks.

Thanks for the reply.
I successfully converted my model to the ONNX format, but running the inference script (Custom ResNet Jetson Xavier - #3 by AastaLLL) throws the same error as this: https://imgur.com/OD8laNl
Here is my recent model: model.onnx (2.9 MB)
Do I have to change the inference script, or do I need to configure the ONNX converter differently?

Hi,

Your ONNX model uses a dynamic shape, but the corresponding optimization profile is not set.
In general, you don't need dynamic shape support if the image size won't change at runtime
(it is 1024x1024x3 in your use case).

To avoid the error, please specify the batch size in the ONNX model with the steps below:
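As a minimal sketch of one way to do this with the onnx Python package (it assumes the batch dimension is the first dimension of each graph input; inspect model.graph.input to confirm):

```python
import onnx

model = onnx.load("model.onnx")

# Replace any symbolic/dynamic batch dimension on the graph inputs with a
# fixed batch size of 1, so TensorRT does not need an optimization profile.
for graph_input in model.graph.input:
    batch_dim = graph_input.type.tensor_type.shape.dim[0]
    if batch_dim.dim_param or batch_dim.dim_value <= 0:
        batch_dim.dim_value = 1  # setting the value also clears the symbolic name

onnx.save(model, "model_static.onnx")
```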

Thanks.

Thanks so much for your support. I'm pretty close now: the inference runs, but my outputs are not really usable:
[screenshot of the printed output values]
These are the code lines that print them:


How can I do some postprocessing to visualize bounding boxes from these outputs?
Or are they systematically wrong?

Hi,

You can find the postprocessing for ssd_mobilenet_v2 in the script below:
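As a rough illustration only (a sketch, not the referenced script), the decode step for raw SSD loc/conf outputs typically looks like the NumPy code below; the box encoding, the variance values, and the prior/anchor layout are assumptions that must match your training configuration, and NMS still has to be applied to the result:

```python
import numpy as np

def decode_ssd(loc, conf, priors, score_thresh=0.3, variances=(0.1, 0.2)):
    """Decode raw SSD outputs into corner boxes, scores, and class labels.

    loc:    (num_priors, 4) regression deltas in (cx, cy, w, h) encoding
    conf:   (num_priors, num_classes) raw class logits (class 0 = background)
    priors: (num_priors, 4) anchor boxes as normalized (cx, cy, w, h)
    """
    # Softmax over classes to turn logits into probabilities.
    conf = np.exp(conf - conf.max(axis=1, keepdims=True))
    conf /= conf.sum(axis=1, keepdims=True)

    # Convert center-size deltas back to corner coordinates.
    cxcy = priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:]
    wh = priors[:, 2:] * np.exp(loc[:, 2:] * variances[1])
    boxes = np.concatenate([cxcy - wh / 2.0, cxcy + wh / 2.0], axis=1)

    # Best non-background score per prior, filtered by threshold.
    scores = conf[:, 1:].max(axis=1)
    labels = conf[:, 1:].argmax(axis=1) + 1
    keep = scores > score_thresh
    return boxes[keep], scores[keep], labels[keep]
```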

Thanks.

This won't work for my specific case, because they implemented a custom postprocessing plugin (NMS) at the end of their conf/loc nodes.
Do you think I need to do the NMS postprocessing in software?
Another strange point is that the output values are in the range between -10^(-7) and 10^(-7), so really small, even though I provided a picture containing ~20 detectable objects.
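For reference, doing the NMS step in software is not much code; a minimal greedy NMS over decoded corner boxes could look like the sketch below (the IoU threshold is a tunable assumption):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over (N, 4) corner boxes."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the current top box against the remaining boxes.
        xy1 = np.maximum(boxes[i, :2], boxes[order[1:], :2])
        xy2 = np.minimum(boxes[i, 2:], boxes[order[1:], 2:])
        inter = np.prod(np.clip(xy2 - xy1, 0, None), axis=1)
        area_i = np.prod(boxes[i, 2:] - boxes[i, :2])
        areas = np.prod(boxes[order[1:], 2:] - boxes[order[1:], :2], axis=1)
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou < iou_thresh]
    return keep
```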

There has been no update from you for a while, so we assume this is not an issue any more.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Do you get the same output from TensorRT compared to your training framework?
What is the output value range when running the model with your training framework?

Thanks.
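For the comparison, a short TensorFlow 1.x sketch like the one below can print the output value range of the frozen graph on the same preprocessed image used with TensorRT; the tensor names are hypothetical standard TF object detection API names and must be adjusted to the actual graph:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x, matching the training setup

# Hypothetical tensor names from the TF object detection API export;
# adjust them to the actual names in your frozen graph.
INPUT_TENSOR = "image_tensor:0"
OUTPUT_TENSORS = ["detection_boxes:0", "detection_scores:0"]

with tf.gfile.GFile("ssd_mobilenet_v2_egohands_ist.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Replace this dummy input with the same 1024x1024 image fed to TensorRT.
image = np.random.randint(0, 255, (1, 1024, 1024, 3), dtype=np.uint8)

with tf.Session(graph=graph) as sess:
    outputs = sess.run(OUTPUT_TENSORS, feed_dict={INPUT_TENSOR: image})

for name, value in zip(OUTPUT_TENSORS, outputs):
    print(name, value.min(), value.max())
```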