Jetson Xavier AGX inference error

• Hardware Platform (Jetson / GPU): Jetson Xavier AGX
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3
Hello, I am trying to run the deepstream-segmentation Python sample on the AGX.
Here is the GitHub link: (deepstream_python_apps/apps/deepstream-segmentation at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub)
I swapped in a model I trained myself and edited “dstest_segmentation_config_semantic.txt” accordingly.
However, I ran into the problem shown below. How can I solve it? Thanks.
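In case it helps to compare setups: when swapping a custom TAO UNet model into this sample, the nvinfer config fields that usually need re-checking look roughly like the sketch below. All paths, the key, and the blob name here are placeholders for my own files, not values from the sample config:

```ini
[property]
# Placeholder values for a custom TAO UNet model -- adjust to your files.
tlt-encoded-model=./my_unet.etlt
tlt-model-key=<your-tao-key>
infer-dims=3;720;1280
batch-size=1
num-detected-classes=19
# network-type=2 selects segmentation in nvinfer
network-type=2
output-blob-names=softmax_1
segmentation-threshold=0.0
```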


P.S. I have tried the model downloaded from the GitHub repository, and it runs successfully, as shown below.

Can you revert your changes one by one to narrow down which change causes the failure, since it works fine without your changes?

I have highlighted the information I changed for my model, as shown below.


The error is shown below, along with the files in the folder.

Is there anything in the Python file that needs to be changed?

The error seems to be reported in the Python code. Can you debug it?

I found something strange in the error message.
The model I exported has the output node “Softmax_1 (Softmax)” with output shape “(None, 720, 1280, 19)”, as shown below.


The code in “deepstream_segmentation.py” is “bgr = np.zeros((shp[0], shp[1], 3))”. When I print shp[0] and shp[1] I get 1280 and 19, but I believe the correct values should be 720 and 1280.

I guess that is why the errors occur.
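For what it's worth, (1280, 19) is exactly the shape you get if code that assumes a channel-first (CHW) output takes argmax over the first axis of a tensor that is actually HWC. A small numpy sketch of that mismatch (my own reconstruction, not the sample's actual parsing code):

```python
import numpy as np

# The exported model's output layout is HWC: (720, 1280, 19).
out = np.zeros((720, 1280, 19), dtype=np.float32)

# A parser that assumes CHW treats axis 0 as the class axis, so the
# resulting "mask" has shape (1280, 19) -- the values seen in shp.
wrong_mask = out.argmax(axis=0)
print(wrong_mask.shape)   # (1280, 19)

# Taking argmax over the real class axis gives the expected (H, W) map.
right_mask = out.argmax(axis=-1)
print(right_mask.shape)   # (720, 1280)
```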


I changed the line directly to “bgr = np.zeros((720, 1280, 3))”, but it still does not work.

How can I solve it?
Thanks.

Did you print the model output and parse it in Python?

Yes, I did.
My model's output node is “Softmax_1 (Softmax)” and its output shape is “(None, 720, 1280, 19)”, as shown below.


When I run inference, the output node it picks up is also “Softmax_1 (Softmax)”, as shown below.

Sorry, this is specific to your model and how its output is parsed. I can’t provide more help.

I see!
But the training process just followed the “Open model architecture-unet.ipynb” file that NVIDIA provides.

The output node after training is “conv2d_5 (Conv2D)” with shape (19, 720, 1280).
But when I export the model, two extra nodes are added: “permute_1 (Permute)” with shape (720, 1280, 19) and “Softmax_1 (Softmax)” with shape (720, 1280, 19).


The export only changes the output shape order from (19, 720, 1280) to (720, 1280, 19), so when I run inference on the AGX the parser gets (1280, 19) instead of (720, 1280), and the error occurs.
Why does it change the order?
Is there anything I can do to keep the output node shape in the order (19, 720, 1280)?
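If the exported graph itself can't be changed, one possible workaround (assuming you can get at the raw output tensor in the Python probe before it is parsed; the variable names here are mine) is to transpose the HWC tensor back to CHW with numpy:

```python
import numpy as np

# Raw output in the exported HWC layout, e.g. from the Softmax_1 node.
hwc = np.zeros((720, 1280, 19), dtype=np.float32)

# Reorder axes to CHW, (19, 720, 1280), and make the buffer contiguous
# in case downstream code expects a flat C-ordered array.
chw = np.ascontiguousarray(np.transpose(hwc, (2, 0, 1)))
print(chw.shape)   # (19, 720, 1280)
```

This does not change the engine's output, only how the Python side views it, so any C/C++ parsing inside the plugin would still see the exported order.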

Are you using TAO to train the model?

Yes, I did.

Hi,
This looks more related to TAO. We are moving this post to the TAO forum to get better help.
@Morganh, please help, thank you.

Please follow GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream to deploy a TAO UNet model in DeepStream.

I have already done it that way!
Now I am running inference from “deepstream_python_apps/apps at v1.0.3 · NVIDIA-AI-IOT/deepstream_python_apps · GitHub”.

So, may I know whether you can run your TAO UNet model successfully with GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream? Officially, that is the recommended GitHub repository mentioned in the TAO user guide.

Yes, I can run my TAO UNet model successfully from “deepstream_tao_apps/apps/tao_segmentation at release/tao3.0 · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub”.

OK. As mentioned above, that is officially the recommended GitHub repository in the TAO user guide.
