Deploying a custom segmentation model to deepstream-test5

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU):

nvidia-smi output (Thu Mar 7 20:21:59 2024):

NVIDIA-SMI 525.147.05 | Driver Version: 525.147.05 | CUDA Version: 12.0
GPU 0: NVIDIA GeForce … | 00000000:01:00.0 | 6MiB / 6144MiB used | 0% utilization
Processes: PID 2904 /usr/lib/xorg/Xorg (4MiB)

• DeepStream Version: 6.2

We trained a custom PyTorch model using the repository below.

We can successfully run the model and observe inference outputs using both the .pth and .onnx formats outside of DeepStream. However, when we integrate the ONNX model into DeepStream SDK 6.2 as the primary GIE in deepstream-test5-app, the ONNX-to-TensorRT engine conversion completes without issues and the application runs, but we see no inference results in the output. Additionally, when we enable the Kafka sink, the application fails. We would appreciate your support in resolving these issues so that we can port our model to the DeepStream test application. We have attached all the files needed to replicate the scenario, as well as the weight files, at the link below.
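For reference, a common cause of "the engine builds and the app runs, but no segmentation output appears" is the PGIE config not declaring the network as a segmentation network (nvinfer defaults to detector mode). A minimal sketch of the relevant Gst-nvinfer config keys is shown below; the file names and output blob name are placeholders that must match your exported model, not values taken from this thread:

```ini
[property]
gpu-id=0
# Placeholder paths -- point these at your exported model
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
# 0=detector (default), 1=classifier, 2=segmentation, 3=instance segmentation.
# Without network-type=2, nvinfer will not attach segmentation metadata.
network-type=2
# Confidence threshold applied to the per-pixel class probabilities
segmentation-threshold=0.0
# Placeholder -- must match the output tensor name in your ONNX graph
output-blob-names=output
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
batch-size=1
```

With network-type=2 set, nvinfer attaches an NvDsInferSegmentationMeta to each frame instead of object metadata, which is what the downstream elements and probes must read.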

You can refer to the example below.


In addition, you can read the section about NvDsInferSegmentationMeta in the documentation; nvinfer puts the segmentation inference results into that metadata.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Sorry for the long delay. If you want to use UNet, here is an official sample for reference.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.