./deepstream-bodypose2d-app 4 ../../../models/bodypose2d/model.etlt 2345 8554 v4l2:///dev/video0 test.mp4
Request sink_0 pad from streammux
Please reach RTSP with rtsp://ip:8554/ds-out-avc
Now playing: v4l2:///dev/video0
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
0:00:05.491902426 37963 0x559d91ed1a90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/user/workspace/nvidia/deepstream_tao_apps/models/bodypose2d/model.etlt_b32_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 288x384x3 min: 1x288x384x3 opt: 32x288x384x3 Max: 32x288x384x3
1 OUTPUT kFLOAT heatmap_out/BiasAdd:0 36x48x19 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT conv2d_transpose_1/BiasAdd:0 144x192x38 min: 0 opt: 0 Max: 0
0:00:05.610762783 37963 0x559d91ed1a90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/user/workspace/nvidia/deepstream_tao_apps/models/bodypose2d/model.etlt_b32_gpu0_fp16.engine
0:00:05.848245554 37963 0x559d91ed1a90 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:../../../configs/bodypose2d_tao/bodypose2d_pgie_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running...
In cb_newpad
Failed to link decoderbin src pad to converter sink pad
###Decodebin did not pick nvidia decoder plugin.
ERROR from element source: Internal data stream error.
Error details: gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstV4l2Src:source:
streaming stopped, reason not-linked (-1)
Returned, stopping playback
Average fps 0.000233
Totally 0 persons are inferred
Deleting pipeline
You can try the format in the source list: v4l2:///dev/video0 (assuming you have only one USB camera and its index is 0).
From the source code you can see that the source list is parsed by the function nvds_parse_source_list and the source is created by the function create_source_bin, which uses the uridecodebin plugin and accepts the v4l2:///* format shown above; refer to the format FAQ here. A minimal sketch of that pattern is shown below.
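To make the expected format concrete, here is a minimal GStreamer sketch (not the sample's exact code; the pipeline layout, element choices, and callback names are illustrative) of the same pattern: a uridecodebin whose uri property is set to a v4l2:///dev/video0 style string, with a pad-added callback that must link the decoded pad to the converter, which is exactly the step failing in the log above.

```c
/* Minimal sketch of a uridecodebin-based source taking a
 * "v4l2:///dev/video0" style URI; illustrative only, not the app's code. */
#include <gst/gst.h>

static void
on_pad_added (GstElement *decodebin, GstPad *pad, gpointer user_data)
{
  GstElement *converter = GST_ELEMENT (user_data);
  GstPad *sinkpad = gst_element_get_static_pad (converter, "sink");

  /* If this link fails, the pipeline stops with "reason not-linked",
   * as seen in the log above. */
  if (gst_pad_is_linked (sinkpad) ||
      gst_pad_link (pad, sinkpad) != GST_PAD_LINK_OK)
    g_printerr ("Failed to link decodebin src pad to converter sink pad\n");

  gst_object_unref (sinkpad);
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline  = gst_pipeline_new ("v4l2-uri-test");
  GstElement *decode    = gst_element_factory_make ("uridecodebin", "uri-decode-bin");
  GstElement *converter = gst_element_factory_make ("nvvideoconvert", "conv");
  GstElement *sink      = gst_element_factory_make ("fakesink", "sink");

  /* Same URI format the app's source list expects for a USB camera. */
  g_object_set (G_OBJECT (decode), "uri", "v4l2:///dev/video0", NULL);

  gst_bin_add_many (GST_BIN (pipeline), decode, converter, sink, NULL);
  gst_element_link (converter, sink);
  g_signal_connect (decode, "pad-added", G_CALLBACK (on_pad_added), converter);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```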
GST_DEBUG=3 ./deepstream-bodypose2d-app bodypose2d_app_config.yml
Request sink_0 pad from streammux
total 1 item
group model-config found 0
Now playing: SESSION_MANAGER=local/toyoaki-desktop:@/tmp/.ICE-unix/1831,unix/toyoaki-desktop:/tmp/.ICE-unix/1831
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:04.482589768 69773 0x55b0614ce920 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/toyoaki/workspace/nvidia/deepstream_tao_apps/models/bodypose2d/model.etlt_b32_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 288x384x3 min: 1x288x384x3 opt: 32x288x384x3 Max: 32x288x384x3
1 OUTPUT kFLOAT heatmap_out/BiasAdd:0 36x48x19 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT conv2d_transpose_1/BiasAdd:0 144x192x38 min: 0 opt: 0 Max: 0
0:00:04.666677438 69773 0x55b0614ce920 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/toyoaki/workspace/nvidia/deepstream_tao_apps/models/bodypose2d/model.etlt_b32_gpu0_fp16.engine
0:00:04.859123175 69773 0x55b0614ce920 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:/home/toyoaki/workspace/nvidia/deepstream_tao_apps/configs/bodypose2d_tao/bodypose2d_pgie_config.yml sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks