Is there an example of how to use a custom TensorRT model with nvinfer?

I would like to use a custom TensorRT model with a self-made DeepStream pipeline, but I was not able to find an example of how to use a custom TensorRT model with the nvinfer plugin.

Hi,

This can be done with a customized configuration file.
You can find some examples below:

/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/config_infer_*
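
As a rough sketch, a minimal configuration for a custom detector engine could look like the one below. The engine path, label file, class count, and parser names are placeholders for your own model; the keys themselves are standard nvinfer properties:

config_infer_custom.txt

[property]
gpu-id=0
# Pre-built TensorRT engine of your custom model (placeholder path)
model-engine-file=/path/to/custom_model.engine
labelfile-path=/path/to/labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
num-detected-classes=4
gie-unique-id=1
# 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
network-type=0
# Only needed if the model's outputs do not match the default
# ResNet-style detector parsing (placeholder function/library names)
parse-bbox-func-name=NvDsInferParseCustomModel
custom-lib-path=/path/to/libnvds_custom_parser.so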

Then you can launch a DeepStream pipeline in two ways:

1. gst-launch-1.0

$ gst-launch-1.0 ... ! nvinfer config-file-path=config_infer_primary.txt ...
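
For illustration, a full single-stream pipeline could look like the one below; the input file, resolution, and sink element are assumptions to adapt to your setup:

$ gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nveglglessink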

2. deepstream-app

Ex.

$ deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt

In source30_1080p_dec_infer-resnet_tiled_display_int8.txt:

[primary-gie]
...
config-file=config_infer_primary.txt
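
For reference, a filled-out [primary-gie] section usually looks something like this (the values are illustrative):

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
batch-size=1
# nvinfer configuration file described above
config-file=config_infer_primary.txt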

For more details, please check our documentation below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_using_custom_model.html

Thanks.

