• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1-b56
• TensorRT Version: 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): Concern
Hello,
Currently, when I run DeepStream I get the following output:
(pipeline:2272): GStreamer-WARNING **: 12:53:02.376: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.
(pipeline:2272): GStreamer-WARNING **: 12:53:02.376: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.1: cannot open shared object file: No such file or directory
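The `librivermax.so.1` failure above comes from `libnvdsgst_udp.so`, which links against NVIDIA's optional Rivermax library. A quick way to confirm what is missing is to inspect the plugin's dynamic dependencies; this is a sketch, with the path taken from the warning in the log (on a Jetson the plugin typically lives under an aarch64 path instead):

```shell
# Check whether the optional Rivermax dependency of the DeepStream UDP
# plugin is resolvable on this system. Path copied from the warning above.
PLUGIN=/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so
if [ -f "$PLUGIN" ]; then
    # Print the rivermax entry from the dependency list, if present
    ldd "$PLUGIN" | grep rivermax || echo "no unresolved rivermax dependency found"
else
    echo "plugin not present on this system"
fi
```

If the pipeline does not use the DeepStream UDP elements, this warning should be harmless and the plugin load failure can be ignored.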
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:08.489107705 2272 0x556035cd7100 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/fast-api/inference_base/dino/dino_model_v1.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT inputs 3x544x960
1 OUTPUT kFLOAT pred_logits 900x91
2 OUTPUT kFLOAT pred_boxes 900x4
0:00:08.594079943 2272 0x556035cd7100 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/fast-api/inference_base/dino/dino_model_v1.onnx_b1_gpu0_fp32.engine
0:00:08.599511642 2272 0x556035cd7100 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:./configs/model_config.txt sucessfully
Now playing: (null)
Deepstream Pipeline is Running now...
New file created: file:///opt/nvidia/deepstream/deepstream-6.4/fast-api/tmp/car_long.mp4
Calling Start 0
creating uridecodebin for [file:///opt/nvidia/deepstream/deepstream-6.4/fast-api/tmp/car_long.mp4]
(pipeline:2272): GStreamer-CRITICAL **: 12:53:14.701: gst_mini_object_copy: assertion 'mini_object != NULL' failed
(pipeline:2272): GStreamer-CRITICAL **: 12:53:14.701: gst_mini_object_unref: assertion 'mini_object != NULL' failed
(pipeline:2272): GStreamer-CRITICAL **: 12:53:14.701: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(pipeline:2272): GStreamer-CRITICAL **: 12:53:14.702: gst_structure_set_value: assertion 'structure != NULL' failed
(pipeline:2272): GStreamer-CRITICAL **: 12:53:14.702: gst_mini_object_unref: assertion 'mini_object != NULL' failed
decodebin child added source
decodebin child added decodebin0
STATE CHANGE ASYNC
decodebin child added qtdemux0
decodebin child added multiqueue0
decodebin child added h264parse0
decodebin child added capsfilter0
decodebin child added nvv4l2decoder0
decodebin new pad video/x-raw
Decodebin linked to pipeline
nvstreammux: Successfully handled EOS for source_id=0
The last line in the terminal, "nvstreammux: Successfully handled EOS for source_id=0", is coming from an internal library. Can I get any information about that library, and where can I access it?
Thank you.
When that message appears, the tasks that are running stop.
For example, I tried to write metadata to a text file; when that message appears, the write stops without completing.
I would be grateful for a solution to this.
That message means nvstreammux received the source's EOS (end-of-stream) message. Which sample are you testing or referring to? You can keep the application from quitting after receiving EOS in the bus_call function.
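The suggestion above (do not quit on EOS in bus_call) can be sketched as follows. This is a hedged, runnable stand-in: in a real DeepStream Python app the handler receives Gst.Bus and Gst.Message objects from PyGObject and the samples call loop.quit() on Gst.MessageType.EOS; the string constants and the quit_on_eos flag here are illustrative, not part of the GStreamer API.

```python
# Sketch of a bus_call that keeps the main loop alive after EOS, so that
# in-flight work (e.g. writing metadata to a file) can finish.
from types import SimpleNamespace

EOS_TYPE = "eos"      # stands in for Gst.MessageType.EOS
ERROR_TYPE = "error"  # stands in for Gst.MessageType.ERROR

def bus_call(bus, message, loop, quit_on_eos=False):
    """Return True to stay attached to the bus (GStreamer convention)."""
    if message.type == EOS_TYPE:
        print("End-of-stream received")
        if quit_on_eos:
            # The default DeepStream samples quit here; skip this to keep
            # the pipeline's main loop (and your tasks) running.
            loop.quit()
    elif message.type == ERROR_TYPE:
        print(f"Error: {message.detail}")
        loop.quit()
    return True
```

Attached with `bus.add_signal_watch()` plus `bus.connect("message", ...)` in the real app, this lets you decide explicitly when the application should exit instead of stopping as soon as the source reaches EOS.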
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
As mentioned above, printing "nvstreammux: Successfully handled EOS for source_id=0" is an internal log and is expected. I am still not clear about the statement "when the video finished it stops with this msg". Could you reproduce this issue with a DeepStream sample?