I would like to use a custom TensorRT model in a self-made DeepStream pipeline, but I could not find an example of how to use a custom TensorRT model with the nvinfer plugin.
This can be done with a customized configuration file for the nvinfer plugin.
You can find an example below:
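As an illustration, a minimal nvinfer configuration for a pre-built TensorRT engine might look like the sketch below. The file paths, engine name, and class count are assumptions for the example; they must match your own model.

```
[property]
gpu-id=0
# Pre-serialized TensorRT engine (assumption: built beforehand, e.g. with trtexec)
model-engine-file=/opt/models/custom_model_b1_gpu0_fp16.engine
labelfile-path=/opt/models/labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 (must match the precision the engine was built with)
network-mode=2
num-detected-classes=4
gie-unique-id=1
# 1=primary (full-frame) inference
process-mode=1
# 0=detector
network-type=0
```

Saving this as config_infer_primary.txt lets both launch methods below pick it up.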
Then you can launch a DeepStream pipeline in one of two ways:
$ gst-launch-1.0 ... ! nvinfer config-file-path=config_infer_primary.txt ...
$ deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt
[primary-gie] ... config-file=config_infer_primary.txt
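For the second way, the [primary-gie] group of the deepstream-app configuration is what links the application to the nvinfer settings. A minimal sketch of that group (values here are assumptions) could be:

```
[primary-gie]
enable=1
gpu-id=0
batch-size=1
# Points to the standalone nvinfer configuration file
config-file=config_infer_primary.txt
# Must match gie-unique-id in the nvinfer config
gie-unique-id=1
```

The gie-unique-id value ties metadata produced by this inference element back to its configuration, so it should be consistent across both files.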
For more details, please check our documentation below: