Video detection with DeepStream does not save the results correctly

DeepStream 6.0.1
YOLOv5 6.1
JetPack 4.6

I am running YOLOv5 through DeepStream to detect objects in a video, and I want to save the detection results for each frame. In a previous post, I was advised to implement this in deepstream_test1_app.c using frame_number, num_rects, and other metadata. (Sorry, I am new to DeepStream.)

I tried to find an easier and quicker way, and I found two options.
First, the tracker plugin can generate TXT files of the detection results, so I added the following to the config TXT file:

# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
# ll-config-file required to set different tracker types
# ll-config-file=config_tracker_IOU.yml
# ll-config-file=config_tracker_NvDCF_accuracy.yml
# ll-config-file=config_tracker_DeepSORT.yml

Then I ran: deepstream-app -c configname.txt
I got many TXT files in the 'track' directory, but all of them are empty.
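For the tracker dump to produce non-empty files, the tracker itself must be enabled and an output directory set, not only the ll-config-file lines. A minimal sketch of the relevant deepstream-app config sections (paths and tracker dimensions below are assumptions for this setup; adjust to your installation):

```ini
[application]
# directory where the tracker writes per-stream KITTI-format files
kitti-track-output-dir=./track

[tracker]
enable=1
# for NvDCF and DeepSORT, width/height must be multiples of 32
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
```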

Second, I noticed there is write_kitti_output in deepstream_app.c, so I added the gie-kitti-output-dir path to the config file:


Then I ran: deepstream-app -c config.txt. I got many TXT files in the track1 directory, but all of them are empty too.

Did TensorRT generate results for the YOLOv5 model?

Yes, I can see the results in the tiled display.

Are the detected objects shown in the video?


Does "all of them are empty" mean the values for left, top, right, bottom, and confidence are all zero?

No, the output TXT files are completely empty: nothing inside, 0 bytes.

Can you get a correct output file using the built-in resnet10 model?

I have solved this problem now. The new problem is that my model detects well under PyTorch, but detects poorly when deployed on DeepStream after conversion to a TensorRT engine. Why is this happening?
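One frequent cause of this kind of accuracy drop is a preprocessing mismatch between PyTorch and the nvinfer config. As a hedged sketch (the exact values depend on how the model was exported; verify against your training pipeline), YOLOv5 typically expects RGB input scaled to [0, 1]:

```ini
[property]
# YOLOv5 preprocessing: scale pixels by 1/255 into [0,1] (value assumed)
net-scale-factor=0.0039215697906
# 0 = RGB, 1 = BGR; YOLOv5 is usually trained on RGB
model-color-format=0
maintain-aspect-ratio=1
# rule out precision loss first: 0 = FP32, then try 2 = FP16, 1 = INT8
network-mode=0
```

It is also worth comparing the pre-cluster-threshold and NMS settings against the confidence/IoU thresholds used in the PyTorch evaluation, since stricter defaults in DeepStream can look like worse detection.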

Did you try running the inference directly with TensorRT? How were the detection results?