Deepstream without TensorRT

Is there any way to use DeepStream without TensorRT? I am using a Jetson Nano. The issue is that when I run my YOLO model through DeepStream, I get very bad accuracy, even though the model performs well without DeepStream. So I think the issue comes from the optimization that TensorRT does, and I want to try DeepStream without TensorRT. Thanks

Hi,

The DeepStream SDK leverages the GStreamer framework, which is a component-based API.
You can run a DeepStream pipeline without TensorRT just by updating the configuration file.

But if you want to replace TensorRT with another framework, you may need to implement it with the GStreamer interface or handle the input/output on your own.

We don't see any obvious accuracy degradation with TensorRT.
Our suggestion is to double-check the pipeline to see if anything is missed.

Are you using INT8 mode? If yes, please remember to generate your own calibration file first.
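As a rough illustration, the precision mode and calibration file are set in the nvinfer configuration file; the property names below come from the nvinfer plugin, and the calibration file path is a placeholder:

```ini
[property]
# network-mode: 0=FP32, 1=INT8, 2=FP16
# INT8 requires a calibration table generated for your own model
network-mode=1
# Placeholder path to your generated calibration file
int8-calib-file=calibration.table
```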

Thanks.

Similar issue:
https://devtalk.nvidia.com/default/topic/1068085/deepstream-sdk/tune-deepstream-yolov3_tiny-parameters-to-perform-as-darknet-version-without-deepstream-/

nvinfer preprocess doc: https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.01.html%23wwpID0E0IZ0HA

Thank you for response.
As you say, "update the configure file" — I explored it but didn't find where I can disable TensorRT. Could you please give specific details on what I should edit to disable TensorRT?

And regarding the bad accuracy: one forum moderator mentioned here (https://devtalk.nvidia.com/default/topic/1058944/tensorrt/worse-coco-map-for-yolov3-on-tensorrt/post/5384939/#5384939) to use TensorRT 6, and here (https://devtalk.nvidia.com/default/topic/1064541/deepstream-sdk/deepstream-4-0-1-and-tensorrt-6-0-1-5/post/5391648/#5391648) a moderator mentioned that the TensorRT 6 release for DeepStream is still in progress.

And for the model precision, I am using the default FP32. In the config:

network-mode=0

Waiting for your response.
Thanks

Thanks for the response.
Let me check.

Didn't find anything helpful, because the discussion on the mentioned topic is not complete yet. BTW, thanks.
Waiting for @AastaLLL's reply.

Please reply; there is an urgent need.
Thanks

Hi,

The TensorRT component appears in the configuration file as [primary-gie] or [secondary-gie].
You can just turn it off.
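A minimal sketch of the relevant group in the deepstream-app configuration file; the enable key is the standard on/off switch for each group:

```ini
[primary-gie]
# 0 = disable the TensorRT inference component
enable=0
```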

However, as mentioned before, this will require you to handle the input/output of the darknet framework on your own.
Supposing darknet uses the OpenCV interface, you can check this comment to see if it helps:
https://devtalk.nvidia.com/default/topic/1047620/deepstream-sdk/how-to-create-opencv-gpumat-from-nvstream-/post/5397368/#5397368

Thanks.

Right, so it needs some C/C++ code. Is there no simpler way?
BTW, by trial and error and analyzing the terminal output, I disabled TensorRT by putting enable=0 under [primary-gie]. But then I didn't get any detections, so the issue might be that I have to add some code as you mentioned.
And can you please tell me what GIE stands for? I searched but didn't find anything.
And I don't want to use any other framework in place of TensorRT; I just want to use DeepStream without TensorRT. Please update here if there is any simple way of doing that, now or in the near future. Thanks

Hi,

Disabling TensorRT turns off the inference module, so there is no detection output.
GIE (GPU Inference Engine) is TensorRT's previous name. Sorry for the confusion.

It's simple to use DeepStream since it is a component-based API.
Just declare the pipeline you want in the configuration file and it should work.

We can provide a sample for your reference.
Would you mind telling us what kind of pipeline you want?

Please note that you won't have any inference output once TensorRT is turned off.
So a possible pipeline looks like: decode -> resize -> display.
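A decode -> resize -> display pipeline along those lines might be declared as the sketch below. The group and key names follow the deepstream-app configuration format; the source URI is a placeholder, and exact type codes should be checked against the DeepStream documentation for your release:

```ini
[source0]
enable=1
# 2 = URI-based source; placeholder file path
type=2
uri=file:///path/to/video.mp4

[streammux]
# Frames are scaled to this resolution by the stream muxer
width=1280
height=720
batch-size=1

[primary-gie]
# Inference disabled, so no detections are produced
enable=0

[sink0]
enable=1
# 2 = on-screen EGL display sink
type=2
```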

Thanks.

Right. Actually, detection is a must for my case. And thanks for the explanation. Please be a little quicker in responding. Thanks