Deepstream-image-meta-test: different results on each run

• Hardware Platform: dGPU
• DeepStream Version: 6.1
• TensorRT Version: 8.4.1-1+cuda11.6
• NVIDIA GPU Driver Version: 510.73.05
• Issue Type: bug/question
• How to reproduce the issue?
Run deepstream-image-meta-test with an mp4 file:

sudo ./deepstream-image-meta-test file:///xxx/test.mp4

It generates some jpg files, but when I run it multiple times I get a different count of jpg files each time. I don't know whether this is a bug; I didn't change any code or config in this test.
• Requirement details:
[image: screenshot of the jpg file counts per run]
The results are shown above. I didn't change any code for out1-out6, yet the output counts differ. I then tried setting download=true on uridecodebin, which generated out7-out8; those result counts also differ.
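
To make the comparison across runs concrete, a loop along these lines can be used (a sketch; the out1..outN directories are hypothetical, and it assumes the app writes its .jpg files into the current working directory as the stock sample does):

#!/bin/bash
# Hypothetical reproduction loop: run the unmodified test app N times
# and report how many jpg files each run produced.
for i in 1 2 3 4; do
  mkdir -p out$i && cd out$i
  sudo ../deepstream-image-meta-test file:///xxx/test.mp4 > run.log 2>&1
  echo "run $i: $(ls *.jpg 2>/dev/null | wc -l) jpg files"
  cd ..
done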

  1. I can't reproduce this issue.
  2. Please provide your video, code diff, and configuration file.
  3. Please share your out1 and out2 via a net disk; we need to check the difference. Thanks!

https://drive.google.com/file/d/134P68Y8Hnx2LNH07VpRz3Q31ydRDanFc/view?usp=sharing
This link contains test.mp4 and out1, 2, 3, and 8.
No code or config differs from the official test app:
deepstream_image_meta_test.c (21.0 KB)
ds_image_meta_pgie_config.txt (3.3 KB)

  1. Using your test.mp4, I still can't reproduce your issue; every run produces 585 pictures, the same as your out3.
  2. What is your GPU model? Did you test in the docker image nvcr.io/nvidia/deepstream:6.1-devel (see the example command below)? Could you provide the terminal logs after the test? Thanks!
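
For reference, such a container is typically launched along these lines (a sketch; the host path to test.mp4 is a hypothetical placeholder):

docker run --gpus all -it --rm -v /path/to/test.mp4:/test.mp4 nvcr.io/nvidia/deepstream:6.1-devel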

My GPU is an RTX 3060. I just tested in the deepstream:6.1-devel docker image with the same test.mp4 four times; it generated 572/588/580/588 jpg files.
This is the log file captured with:

deepstream-image-meta-test file:///test.mp4 2>&1 | tee test.log

test.log (167.5 KB)

P.S.: I originally found this bug on a Tesla T4, in a DeepStream 6.0 docker container, with my own code.

Using a Tesla T4, I can reproduce this issue in the deepstream:6.1-devel docker image with the same test.mp4: out of ten tests, one run produced 383 jpg files and the other nine produced 295.
We will add logs to the nvinfer plugin to debug.

Thank you; looking forward to your good news.

As a workaround, please set model-engine-file in ds_image_meta_pgie_config.txt. For example:
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
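
Applied to the stock sample config, the relevant section would then look roughly like this (a sketch based on the shipped ds_image_meta_pgie_config.txt; the exact neighboring keys may differ):

[property]
gpu-id=0
# model-file and proto-file removed so nvinfer loads the prebuilt engine directly
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
batch-size=1
network-mode=1
num-detected-classes=4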

I just removed model-file and proto-file and added the model-engine-file setting; it generated 579/585/584 files over three runs.

Please provide the whole logs after setting model-engine-file. Please also check whether a new engine file is created on every run.

-rw-r--r-- 1 root root 2.3M Jul  8 09:23 resnet10.caffemodel_b1_gpu0_int8.engine

It didn't create a new engine file.
The log file below is from a run that produced 587 outputs.
test.log (167.5 KB)

  1. On a Tesla T4, I ran it 30 times after setting model-engine-file; every run produced 585 pictures.
    t4-30times.txt (2.0 KB)
    Please verify this workaround on your T4, thanks!
  2. Was your latest test based on the RTX 3060? We currently don't have that device; we will continue to check this new case, where the object counts still vary even with the same engine.

Yes, you're right.
I tested on the Tesla T4 and the RTX 3060 ten times each with the engine specified: it generated identical outputs on the T4 but differing outputs on the RTX 3060.
Thank you for your patient replies.
Will your team solve this issue, or do you focus only on the server and Jetson series?


Here is a summary of this DeepStream bug on T4:
If network-mode is INT8 and model-engine-file is not set, DeepStream inference does not give the same results on every run; for example, some results have bboxes and some do not.
Solution:
Set model-engine-file to an engine generated by trtexec. Here is the command:
/usr/src/tensorrt/bin/trtexec --calib=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/cal_trt.bin --deploy=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.prototxt --model=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel --maxBatch=1 --saveEngine=resnet10.caffemodel_b1_gpu0_int8.engine --buildOnly --output=conv2d_bbox --output=conv2d_cov/Sigmoid --precisionConstraints=obey --layerPrecisions=conv2d_cov:int8 --layerOutputTypes=conv2d_cov:int8 --int8

In particular, "--precisionConstraints=obey --layerPrecisions=conv2d_cov:int8 --layerOutputTypes=conv2d_cov:int8" is used to pin the conv2d_cov layer to INT8 inference precision.
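
To sanity-check that the saved engine loads and runs before pointing the nvinfer config at it, trtexec can replay it (a sketch; --batch=1 matches the --maxBatch=1 implicit-batch engine built above):

/usr/src/tensorrt/bin/trtexec --loadEngine=resnet10.caffemodel_b1_gpu0_int8.engine --batch=1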