DeepStream compatibility issues with UNET output layer change

I have also verified that it works well with the previous version.

I suspect the root cause is the .tlt-to-.etlt model conversion in TAO, because the log of the latest version of the "tao export" command shows that the output layer has changed.
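For reference, a UNet export invocation in that generation of TAO looks roughly like the sketch below. The paths, $KEY, and spec file are placeholders rather than this thread's actual files, and the exact flags may differ between versions. The exporter log prints the output node name, which is where the softmax-to-argmax change shows up.

# Hedged sketch of a TAO UNet export; paths, $KEY, and spec.txt are placeholders.
tao unet export \
  -m /workspace/unet/model.tlt \
  -k "$KEY" \
  -e /workspace/unet/spec.txt \
  --data_type int8 \
  --cal_cache_file /workspace/unet/cal.bin
# Check the exporter log for the reported output node: older versions keep
# the softmax output, while 22.05 reports an argmax layer instead.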

Thanks for sharing your information!

Hi @lucasp ,
For the 22.05 version model, could you comment out the line below to let DeepStream generate the TensorRT engine?
# model-engine-file=/workspace/deepstream-huhf-unet/model_files/model_huhf_v0_600_cal_int8.etlt_b1_gpu0_int8.engine
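For context, that line lives in the [property] section of the nvinfer config. A minimal sketch of the relevant section is below, using the standard nvinfer keys and this thread's model path; the key value and calibration file name are placeholders. With model-engine-file commented out, nvinfer rebuilds the engine from the .etlt at startup.

[property]
tlt-encoded-model=/workspace/deepstream-huhf-unet/model_files/model_huhf_v0_600_cal_int8.etlt
tlt-model-key=<your_tao_key>
int8-calib-file=/workspace/deepstream-huhf-unet/model_files/cal.bin
# network-mode=1 selects INT8 precision
network-mode=1
# Commented out so nvinfer regenerates the engine from the .etlt:
# model-engine-file=/workspace/deepstream-huhf-unet/model_files/model_huhf_v0_600_cal_int8.etlt_b1_gpu0_int8.engine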

Also, please git clone the latest GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream. There were some changes about 15 days ago.
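For reference, a fresh clone (the URL follows from the repo name above):

# Clone the latest deepstream_tao_apps samples.
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git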

I am still interested in this thread but have been working on other projects. I will attempt the newly recommended fixes soon. Please don’t close the thread yet.

OK, please let us know when you have a new update.

We faced the same issue of DeepStream not producing any output for a custom UNet model trained with TAO version tao-toolkit-tf:v3.22.05-tf1.15.5-py3, where the exported .etlt model has an argmax output layer.
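One way to confirm which output head the export produced is to inspect the bindings of the TensorRT engine generated from the .etlt. Below is a minimal sketch using the TensorRT 8.x Python API; the engine path is hypothetical. An argmax head typically appears as a single-channel INT32 output, while a softmax head is a float output with one channel per class.

# Minimal sketch (TensorRT 8.x Python API); the engine path is a placeholder.
import tensorrt as trt

ENGINE_PATH = "model_huhf_v0_600_cal_int8.etlt_b1_gpu0_int8.engine"

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input " if engine.binding_is_input(i) else "output"
    # An argmax head is typically a 1-channel INT32 tensor;
    # a softmax head is a float tensor with num_classes channels.
    print(kind, engine.get_binding_name(i),
          engine.get_binding_shape(i), engine.get_binding_dtype(i))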

For now, our workaround is to take the .tlt file trained in the current version of TAO and export it with an older version (tao-toolkit-tf:v3.21.11-tf1.15.5-py3 in our case), which does not replace the softmax layer in the .etlt file. This works for us; we are able to get output in DeepStream from the exported .etlt file.
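For anyone reproducing this, a rough sketch of the workaround follows, assuming the model directory is mounted into the older container and the export entry point is invoked directly. The mount path, $KEY, and spec.txt are placeholders, and the in-container "unet export" command is our assumption about what the tao launcher wraps.

# Hedged sketch: run the export inside the older 3.21.11 container so the
# softmax output layer is preserved. Paths, $KEY, and spec.txt are placeholders.
docker run --rm --gpus all \
  -v /path/to/workspace:/workspace \
  nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3 \
  unet export \
    -m /workspace/unet/model.tlt \
    -k "$KEY" \
    -e /workspace/unet/spec.txt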

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Please git clone the latest GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.