It generates some jpg files, but when I run it multiple times I get a different count of jpg files each time. I don't know whether this is a bug; I didn't change any code or config between test runs.
**Requirement details:**
The results are shown above. I didn't change any code for out1-out6, yet the outputs differ.
I then tried setting download=true on uridecodebin, which generated out7-out8; the result count also differs. A sketch of that change follows below.
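For reference, here is a minimal sketch of that change, assuming the uri_decode_bin element that the sample app creates (adapt to your code):

```c
#include <gst/gst.h>

/* Sketch: enable uridecodebin's boolean "download" property so the
 * source is buffered to a local file before decoding. uri_decode_bin
 * is assumed to be the uridecodebin element the sample app creates. */
static void
enable_download (GstElement *uri_decode_bin)
{
  g_object_set (G_OBJECT (uri_decode_bin), "download", TRUE, NULL);
}
```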
My GPU is an RTX 3060, and I just ran the same test.mp4 four times in the deepstream:6.1-devel Docker container; it generated 572/588/580/588 jpg files.
This is the log file from running:
deepstream-image-meta-test file:///test.mp4 2>&1 | tee test.log
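For anyone reproducing this, a simple counting loop along these lines shows the varying totals (the output directory and jpg naming are assumptions; adjust to wherever the sample actually writes its files):

```sh
# Hypothetical repro loop: run the sample several times and count the
# jpg files each run produces. Paths and file names are assumptions.
for i in 1 2 3 4; do
  rm -f ./*.jpg
  deepstream-image-meta-test file:///test.mp4 > run$i.log 2>&1
  echo "run $i: $(ls -1 ./*.jpg 2>/dev/null | wc -l) jpg files"
done
```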
Using a Tesla T4, I can reproduce this issue in the deepstream:6.1-devel Docker container with the same test.mp4: across ten runs, one produced 383 jpg files and the other nine produced 295 jpg files.
I will add logs in the nvinfer plugin to debug further.
As a workaround, please set model-engine-file in ds_image_meta_pgie_config.txt. For example:
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
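For context, this key belongs in the [property] section of the nvinfer config; a sketch of the relevant excerpt (surrounding keys omitted):

```ini
# ds_image_meta_pgie_config.txt (excerpt; other keys omitted)
[property]
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
```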
On a Tesla T4, I ran the test 30 times after setting model-engine-file; every run produced 585 pictures. t4-30times.txt (2.0 KB)
Please verify this workaround on your T4, thanks!
Is your latest test based on the RTX 3060? We currently don't have that device, so we will continue to check this new case, where the object counts still vary even with the same engine.
Yes, you're right.
I tested on the Tesla T4 and the RTX 3060 ten times each with the engine specified:
it generated identical outputs on the T4 and differing outputs on the RTX 3060.
Thank you for your patient reply.
Will your team fix this issue, or do you only focus on the server and Jetson series?
To summarize, here is a DeepStream bug on the T4.
If network-mode is INT8 and model-engine-file is not set, DeepStream inference cannot give the same results every run; for example, some runs produce bboxes where others do not. Solution:
set model-engine-file to an engine generated by trtexec. Here is the command:
/usr/src/tensorrt/bin/trtexec --calib=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/cal_trt.bin --deploy=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.prototxt --model=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel --maxBatch=1 --saveEngine=resnet10.caffemodel_b1_gpu0_int8.engine --buildOnly --output=conv2d_bbox --output=conv2d_cov/Sigmoid --precisionConstraints=obey --layerPrecisions=conv2d_cov:int8 --layerOutputTypes=conv2d_cov:int8 --int8
In particular, "--precisionConstraints=obey --layerPrecisions=conv2d_cov:int8 --layerOutputTypes=conv2d_cov:int8" is used to pin INT8 inference precision on the conv2d_cov layer.
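To complete the workaround, point the nvinfer config at the engine trtexec just saved, so DeepStream deserializes it instead of rebuilding one on the fly (the path below is an assumption; use wherever you wrote the file):

```ini
# ds_image_meta_pgie_config.txt (excerpt)
[property]
network-mode=1
model-engine-file=/path/to/resnet10.caffemodel_b1_gpu0_int8.engine
```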