How to debug a custom TensorRT model built with the TensorRT network API in DeepStream

Following the given YOLO network example code, I defined my own custom TensorRT network and integrated it into a DeepStream application, but the TensorRT output produced inside the nvinfer plugin does not match my expectations. Could you tell me a way to print the network's input batch data and the outputs of each layer? I am confused because I cannot locate where the problem is.

By the way, the network works correctly when I run it with TensorRT alone, outside of the DeepStream application.


Do you use the C++ interface or the Python sample?
If Python, you can read the output tensors via NvDsInferTensorMeta, based on the example below:
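As a rough illustration of that approach, here is a sketch of a buffer probe that dumps the raw nvinfer output tensors with the pyds bindings. It assumes `output-tensor-meta=1` is set in the [property] group of your nvinfer config file, that the probe is attached to the nvinfer element's src pad, and that the outputs are FP32; the field name `inferDims` may be `dims` in older binding versions, so adjust for your DeepStream release.

```python
# Sketch: print each inference output layer per frame (assumptions noted above).
import ctypes
import numpy as np
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def tensor_probe(pad, info, user_data):
    buf = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # View the raw FP32 output buffer as a NumPy array
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    arr = np.ctypeslib.as_array(
                        ptr, shape=(layer.inferDims.numElements,))
                    # Print the layer name and the first few values
                    print(layer.layerName, arr[:10])
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

You would attach it with something like `nvinfer.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, tensor_probe, 0)`. Note this only exposes the final output tensors; to inspect intermediate layers you would need to mark them as network outputs when building the engine.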