Video detection using DeepStream cannot save the results correctly

Jetson TX2 NX
DeepStream 6.0.1
YOLOv5 6.1
JetPack 4.6

I am running YOLOv5 through DeepStream to detect objects in a video, and I want to save the detection results for each frame. In a previous post, I was advised to modify deepstream_test1_app.c and implement this through frame_number, num_rects and other metadata fields. (Sorry, I am new to DeepStream.)

I tried to find an easier and quicker way, and found two options.
First, the tracker plugin can generate TXT files of the detection results, so I added the following to the config file:

[application]
...............
kitti-track-output-dir=/home/lclc/Desktop/track
[tracker]
enable=1
# For NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=config_tracker_IOU.yml
ll-config-file=config_tracker_NvDCF_perf.yml
# ll-config-file=config_tracker_NvDCF_accuracy.yml
# ll-config-file=config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

Then I ran: deepstream-app -c configname.txt
I got many TXT files in the 'track' directory, but all of them are empty.
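To confirm the files really are zero bytes (rather than containing all-zero values), a small sketch like this can partition the per-frame output files; the directory path matches the kitti-track-output-dir above, and the function name `split_kitti_files` is just an illustration:

```python
from pathlib import Path

def split_kitti_files(output_dir):
    """Partition per-frame KITTI .txt files into empty and non-empty name lists."""
    empty, filled = [], []
    for path in sorted(Path(output_dir).glob("*.txt")):
        # A 0-byte file means the tracker wrote nothing for that frame
        (filled if path.stat().st_size > 0 else empty).append(path.name)
    return empty, filled

if __name__ == "__main__":
    empty, filled = split_kitti_files("/home/lclc/Desktop/track")
    print(f"{len(empty)} empty files, {len(filled)} files with detections")
```

If every file lands in the empty list, the tracker output path was created but no metadata was ever written, which points at the pipeline rather than the values.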

Second, I noticed there is a write_kitti_output function in deepstream_app.c, so I added a gie-kitti-output-dir path to the config file:

[application]
.................................
gie-kitti-output-dir=/home/lclc/Desktop/track1

Then I ran: deepstream-app -c config.txt. I got many TXT files in the 'track1' directory, but all of them are empty too.
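For reference, once the files are actually populated, each line should follow the standard KITTI label layout (object type, truncation, occlusion, alpha, then the bounding box as left/top/right/bottom, then 3D fields, with an optional trailing confidence). A minimal parser sketch, assuming that standard field layout:

```python
def parse_kitti_line(line):
    """Parse one KITTI-format label line into (label, bbox, confidence).

    Standard KITTI fields: type, truncated, occluded, alpha,
    bbox left/top/right/bottom (fields 4-7), then 3D dimension,
    location and rotation fields; a 16th field, when present,
    is the detection confidence.
    """
    fields = line.split()
    label = fields[0]
    left, top, right, bottom = map(float, fields[4:8])
    confidence = float(fields[15]) if len(fields) > 15 else None
    return label, (left, top, right, bottom), confidence
```

For example, `parse_kitti_line("Car 0.0 0 0.0 100.0 120.0 220.0 260.0 0 0 0 0 0 0 0 0.87")` yields the label "Car", the box (100.0, 120.0, 220.0, 260.0), and confidence 0.87.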

Did TensorRT generate results for the YOLOv5 model?

Yes, I can see the results in the tiled display.

Are the detected objects shown in the video?

Yes.

Does "all of them empty" mean the values for left, top, right, bottom, and confidence are all zero?

No, the output TXT files are empty; there is nothing inside them (0 bytes).

Can you get correct files using the built-in resnet10 model?

I've solved this problem now. The new problem is that my model detects well under PyTorch, but detects poorly when deployed on DeepStream after conversion to a TensorRT engine file. Why is this happening?
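A frequent cause of this kind of accuracy drop is a preprocessing mismatch between the PyTorch pipeline and Gst-nvinfer: YOLOv5 normalizes input pixels to [0,1] and expects RGB input. A sketch of the relevant [property] settings in the nvinfer config — the exact values must be matched to your own training preprocessing:

```
[property]
# YOLOv5 scales pixels to [0,1], i.e. a scale factor of 1/255
net-scale-factor=0.0039215697906911373
# Color format the model expects (0=RGB, 1=BGR)
model-color-format=0
# Precision of the TensorRT engine (0=FP32, 1=INT8, 2=FP16);
# try FP32 first to rule out precision loss from FP16/INT8
network-mode=0
```

If FP32 restores accuracy, the degradation came from reduced precision rather than the model conversion itself.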

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Did you try running inference directly with TensorRT? How were the detection results?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.