How to replace the TensorRT model in DeepStream with our own model files

We want to use DeepStream 3.0 and TensorRT to run detection on video.
I have downloaded DeepStreamSDK-Tesla-v3.0 and the demo works.
But I don't know how to replace the demo's TensorRT model with our own TensorRT model.
I want to know the input and output parameters of the TensorRT model in DeepStream.
The DeepStream demo code does not print enough debug information about the TensorRT model's input and output parameters.
Where can I find more documentation or information?
Thanks.

Hi chenzongxi123,

Please migrate to DeepStream 4.0 and refer to this thread: NvDsInferParseCustomSSD deepstream 4.0 - DeepStream SDK - NVIDIA Developer Forums
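For context, in DeepStream 4.0 swapping in your own TensorRT model is normally done through the nvinfer element's configuration file rather than by modifying gstnvinfer.c, and a custom output parser (like the NvDsInferParseCustomSSD function in the linked thread) is hooked in via that same file. A rough sketch is below; the file names, class count, and parser name are placeholders for your own model, not values from this thread:

```
[property]
gpu-id=0
# Pre-built TensorRT engine for your model (placeholder path)
model-engine-file=model_b1_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
# Custom bounding-box parsing function and the library implementing it
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=libnvdsinfer_custom_impl_ssd.so
```

The custom library exports the parsing function that translates your model's raw output layers into DeepStream detection objects, which is where knowledge of your model's input and output tensors comes in.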

Hi Kayccc, thanks for your reply.
Is the gstnvinfer.c file open source? Where can I find it?
There are some error messages printed by gstnvinfer.c, and I want to debug that code to find the cause of the errors.

And is it necessary to migrate to DeepStream 4.0?

Yes, please use DeepStream SDK 4.0 for more features and sample applications.