Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): RTX 3060
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
• Issue Type (questions, new requirements, bugs): bugs
After installing DeepStream SDK 6.2 by following the Quickstart Guide, I ran "deepstream-app -c configs/deepstream-app/source30_1080p_dec_preprocess_infer-resnet_tiled_display_int8.txt".
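For completeness, here is the exact invocation (run from the samples directory of the default install, which is where the relative config path resolves):

cd /opt/nvidia/deepstream/deepstream-6.2/samples
deepstream-app -c configs/deepstream-app/source30_1080p_dec_preprocess_infer-resnet_tiled_display_int8.txt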
This is the error output I got:
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b60_gpu0_int8.engine open error
0:00:03.933110775 13430 0x555f39df3d60 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b60_gpu0_int8.engine failed
0:00:03.992657063 13430 0x555f39df3d60 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b60_gpu0_int8.engine failed, try rebuild
0:00:03.992670884 13430 0x555f39df3d60 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1459 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b60_gpu0_int8.engine opened error
0:01:15.230413790 13430 0x555f39df3d60 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1950> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b60_gpu0_int8.engine
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:01:15.298829291 13430 0x555f39df3d60 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg) FPS 1 (Avg) FPS 2 (Avg) FPS 3 (Avg) FPS 4 (Avg) FPS 5 (Avg) FPS 6 (Avg) FPS 7 (Avg) FPS 8 (Avg) FPS 9 (Avg) FPS 10 (Avg) FPS 11 (Avg) FPS 12 (Avg) FPS 13 (Avg) FPS 14 (Avg) FPS 15 (Avg) FPS 16 (Avg) FPS 17 (Avg) FPS 18 (Avg) FPS 19 (Avg) FPS 20 (Avg) FPS 21 (Avg) FPS 22 (Avg) FPS 23 (Avg) FPS 24 (Avg) FPS 25 (Avg) FPS 26 (Avg) FPS 27 (Avg) FPS 28 (Avg) FPS 29 (Avg)
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
** INFO: <bus_callback:239>: Pipeline ready
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
0:01:15.749347524 13430 0x555f38eea760 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop:<primary_gie> error: Internal data stream error.
0:01:15.749362408 13430 0x555f38eea760 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop:<primary_gie> error: streaming stopped, reason not-negotiated (-4)
ERROR from primary_gie: Internal data stream error.
Debug info: gstnvinfer.cpp(2369): gst_nvinfer_output_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
streaming stopped, reason not-negotiated (-4)
Quitting
nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=1
nvstreammux: Successfully handled EOS for source_id=2
nvstreammux: Successfully handled EOS for source_id=3
nvstreammux: Successfully handled EOS for source_id=4
nvstreammux: Successfully handled EOS for source_id=5
nvstreammux: Successfully handled EOS for source_id=6
nvstreammux: Successfully handled EOS for source_id=7
nvstreammux: Successfully handled EOS for source_id=8
nvstreammux: Successfully handled EOS for source_id=9
nvstreammux: Successfully handled EOS for source_id=10
nvstreammux: Successfully handled EOS for source_id=11
nvstreammux: Successfully handled EOS for source_id=12
nvstreammux: Successfully handled EOS for source_id=13
nvstreammux: Successfully handled EOS for source_id=14
nvstreammux: Successfully handled EOS for source_id=15
nvstreammux: Successfully handled EOS for source_id=16
nvstreammux: Successfully handled EOS for source_id=17
nvstreammux: Successfully handled EOS for source_id=18
nvstreammux: Successfully handled EOS for source_id=19
nvstreammux: Successfully handled EOS for source_id=20
nvstreammux: Successfully handled EOS for source_id=21
nvstreammux: Successfully handled EOS for source_id=22
nvstreammux: Successfully handled EOS for source_id=23
nvstreammux: Successfully handled EOS for source_id=24
nvstreammux: Successfully handled EOS for source_id=25
nvstreammux: Successfully handled EOS for source_id=26
nvstreammux: Successfully handled EOS for source_id=27
nvstreammux: Successfully handled EOS for source_id=28
nvstreammux: Successfully handled EOS for source_id=29
nvstreammux: Successfully handled EOS for source_id=13
ERROR from tiled_display_queue: Internal data stream error.
Debug info: gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:pipeline/GstBin:tiled_display_bin/GstQueue:tiled_display_queue:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=24
nvstreammux: Successfully handled EOS for source_id=12
nvstreammux: Successfully handled EOS for source_id=3
nvstreammux: Successfully handled EOS for source_id=26
nvstreammux: Successfully handled EOS for source_id=27
nvstreammux: Successfully handled EOS for source_id=17
nvstreammux: Successfully handled EOS for source_id=6
nvstreammux: Successfully handled EOS for source_id=8
App run failed
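In case it helps with triage, my reading of the log (I am not certain of the root cause): the serialize/deserialize warnings suggest deepstream-app cannot read or write the cached .engine file under samples/models/Primary_Detector, which I understand is usually a directory-permissions issue; the lazy-loading warnings appear to refer to the CUDA_MODULE_LOADING environment variable (CUDA 11.7+); and the fatal part looks like cuGraphicsGLRegisterBuffer error 219 in the EGL sink, which I believe means the OpenGL graphics context is not valid on the NVIDIA GPU. A minimal sketch of the checks I plan to run, assuming the default install path shown in the log:

# Check whether deepstream-app can write the generated engine file here
ls -ld /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector
# (Optional) take ownership so the built engine can be cached; path taken from the log above
sudo chown -R "$USER" /opt/nvidia/deepstream/deepstream-6.2/samples/models
# Enable CUDA lazy loading, as suggested by the TensorRT warning
export CUDA_MODULE_LOADING=LAZY
# Confirm OpenGL is actually served by the NVIDIA driver (glxinfo is from mesa-utils)
glxinfo | grep -i "opengl vendor"

Is this the right direction, or is something else causing the not-negotiated (-4) error?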