EDIT: I see a few warnings in the log, but I don't know what they mean.
The stream is 1080p@30fps and plays back perfectly fine with DeepStream, so why are buffers being dropped?
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): ../libs/gst/base/gstbasesink.c(3143): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNv3dSink:nv3d-sink:
I thought the engine file would only be generated the first time we launch the program. Why is it rebuilt on every run?
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine open error
0:00:00.232730289 64425 0xaaab09152860 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 3]: deserialize engine from file :/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine failed
0:00:00.232765170 64425 0xaaab09152860 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 3]: deserialize backend context from engine from file :/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine failed, try rebuild
0:00:00.232781170 64425 0xaaab09152860 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 3]: Trying to create engine from model files
WARNING: INT8 calibration file not specified. Trying FP16 mode.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
0:02:02.757295391 64425 0xaaab09152860 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 3]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_fp16.engine successfully
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input_1:0 3x224x224 min: 1x3x224x224 opt: 16x3x224x224 Max: 16x3x224x224
1 OUTPUT kFLOAT predictions/Softmax:0 6 min: 0 opt: 0 Max: 0
0:02:03.170268895 64425 0xaaab09152860 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 3]: Load new model:dstest2_sgie2_config.txt sucessfully
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:02:03.185914580 64425 0xaaab09152860 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet_pruned.onnx_b16_gpu0_int8.engine
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input_1:0 3x224x224 min: 1x3x224x224 opt: 16x3x224x224 Max: 16x3x224x224
1 OUTPUT kFLOAT predictions/Softmax:0 20 min: 0 opt: 0 Max: 0
0:02:03.186074872 64425 0xaaab09152860 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet_pruned.onnx_b16_gpu0_int8.engine
0:02:03.199906046 64425 0xaaab09152860 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:02:03.278966108 64425 0xaaab09152860 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0
0:02:03.279074367 64425 0xaaab09152860 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
0:02:03.285028217 64425 0xaaab09152860 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest2_pgie_config.txt sucessfully
I checked: the int8.engine file does not exist; there is an fp16.engine instead, even though the configuration specifies INT8. Why?
BTW: the same warnings are also printed when I use the demo file, and building the engine takes quite a long time.
daniel@daniel-nvidia:~/Work/jetson-fpv$ ls /home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine
ls: cannot access '/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine': No such file or directory
daniel@daniel-nvidia:~/Work/jetson-fpv$ ls /home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/
cal_trt.bin labels.txt resnet18_vehicletypenet_pruned.onnx resnet18_vehicletypenet_pruned.onnx_b16_gpu0_fp16.engine
The FPS looks OK, about 30 fps. Is there anything wrong with my DeepStream/NvDCF pipeline configuration?
daniel@daniel-nvidia:~/Work/jetson-fpv$ python3 ./utils/deepstream/deepstream_NvDCF.py -s -i rtp://@:5600
Current working directory: /home/daniel/Work/jetson-fpv
New working directory: /home/daniel/Work/jetson-fpv/utils/deepstream
{'input': ['rtp://@:5600'], 'input_codec': 'h264', 'silent': True}
Creating Pipeline
Creating streamux
Is it Integrated GPU? : 1
Creating nv3dsink
Adding elements to Pipeline
Creating source_bin 0
Creating rtp h264 bin
source-bin-00
0.0.0.0 5600
Starting pipeline
Opening in BLOCKING MODE
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine open error
0:00:00.222629957 88135 0xaaab15d34410 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 3]: deserialize engine from file :/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine failed
0:00:00.222664742 88135 0xaaab15d34410 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 3]: deserialize backend context from engine from file :/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_int8.engine failed, try rebuild
0:00:00.222682087 88135 0xaaab15d34410 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 3]: Trying to create engine from model files
WARNING: INT8 calibration file not specified. Trying FP16 mode.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
0:01:53.478484725 88135 0xaaab15d34410 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 3]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_fp16.engine successfully
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input_1:0 3x224x224 min: 1x3x224x224 opt: 16x3x224x224 Max: 16x3x224x224
1 OUTPUT kFLOAT predictions/Softmax:0 6 min: 0 opt: 0 Max: 0
0:01:53.884154875 88135 0xaaab15d34410 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary2-nvinference-engine> [UID 3]: Load new model:dstest2_sgie2_config.txt sucessfully
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:01:53.899319675 88135 0xaaab15d34410 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet_pruned.onnx_b16_gpu0_int8.engine
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input_1:0 3x224x224 min: 1x3x224x224 opt: 16x3x224x224 Max: 16x3x224x224
1 OUTPUT kFLOAT predictions/Softmax:0 20 min: 0 opt: 0 Max: 0
0:01:53.899438142 88135 0xaaab15d34410 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet_pruned.onnx_b16_gpu0_int8.engine
0:01:53.911877619 88135 0xaaab15d34410 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:01:53.989112822 88135 0xaaab15d34410 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0
0:01:53.989210233 88135 0xaaab15d34410 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
0:01:53.995799149 88135 0xaaab15d34410 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest2_pgie_config.txt sucessfully
**PERF: {'stream0': 0.0}
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
**PERF: {'stream0': 2.18}
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): ../libs/gst/base/gstbasesink.c(3143): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNv3dSink:nv3d-sink:
There may be a timestamping problem, or this computer is too slow.
[the two warning lines above repeat continuously throughout the run; trimmed for brevity]
**PERF: {'stream0': 10.6}
**PERF: {'stream0': 29.2}
**PERF: {'stream0': 30.8}
**PERF: {'stream0': 29.8}
"gst_base_sink_is_too_late" means the sink is dropping buffers because they arrived too late. You can set sync=false on nv3dsink.
The first time the app runs, it converts the model to a TensorRT engine. After that, you can set the engine path in the configuration so the app loads the existing engine directly instead of creating a new one each time.
You need to set an INT8 calibration file if you want to run the model with network-mode=1 (INT8). Without a calibration file, the app will create an FP16 engine instead.
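Putting both points together, a sketch of the relevant [property] keys in the SGIE config (e.g. dstest2_sgie2_config.txt); the paths are examples based on the files shown in your `ls` output, so adjust them to your setup:

```
[property]
# Point nvinfer at the engine file that actually exists on disk, so it is
# deserialized directly instead of being rebuilt on every launch.
model-engine-file=samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet_pruned.onnx_b16_gpu0_fp16.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16 -- match the precision of that engine.
network-mode=2
# For INT8 (network-mode=1) a calibration file is required, e.g.:
# int8-calib-file=samples/models/Secondary_VehicleTypes/cal_trt.bin
```

Note the engine filename encodes batch size, GPU, and precision, so if the config asks for INT8 but only an FP16 engine was built, the INT8 path will never match and a rebuild is attempted every run.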
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Please refer to this link for performance-improvement tips.
If it still doesn't work well, could you share the log of "sudo tegrastats" while the application is running? I wonder if there is a performance issue.
If it still doesn't work well, please simplify the pipeline to find which element causes the issue, for example by removing elements temporarily. If you use "src -> nvstreammux -> sink", does the issue remain?
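As a starting point for that simplification, a gst-launch-1.0 sketch of "src -> nvstreammux -> sink" with all inference and tracker elements removed. This is only an illustrative pipeline: the port (5600) comes from your command line, but the caps, batch size, and resolution are assumptions you should match to your stream:

```shell
# Minimal pipeline to isolate the buffer-drop warnings (Jetson only).
gst-launch-1.0 \
  udpsrc port=5600 caps="application/x-rtp, media=video, encoding-name=H264, payload=96" ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nv3dsink sync=false
```

If the warnings disappear here, add the nvinfer/nvtracker elements back one at a time to find the one that introduces the latency.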