Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - Tesla T4
• DeepStream Version - DeepStream 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only)
I want to use an RTSP stream as input instead of a file, but I am getting the error below. I checked my RTSP URL and it is working. Currently I am using filesink to save the video output, but eventually I want to send the output to Kafka.
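For reference, the stream can also be sanity-checked outside of DeepStream with a plain GStreamer pipeline roughly like the one below. This is only a sketch that mirrors the uridecodebin source path the app builds; {IP} stays a placeholder.

# Quote the URI so the shell does not treat '&' as a background operator.
gst-launch-1.0 -v uridecodebin uri="rtsp://{IP}/nphMotionJpeg?Resolution=320x240&Quality=Standard" ! fakesink sync=false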
deepstream_action_recognition_config.txt:
[action-recognition]
# stream/file source list
# uri-list=file:///root/zaiming/opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_ride_bike.mov;
uri-list=rtsp://{IP}/nphMotionJpeg?Resolution=320x240&Quality=Standard;
# eglglessink settings
display-sync=1
# 0=eglgles display; 1=fakesink
fakesink=0
# <preprocess-config> is the config file path for nvdspreprocess plugin
# <infer-config> is the config file path for nvinfer plugin
# Enable 3D preprocess and inference
preprocess-config=config_preprocess_3d_custom.txt
infer-config=config_infer_primary_3d_action.txt
# Uncomment to enable 2D preprocess and inference
#preprocess-config=config_preprocess_2d_custom.txt
#infer-config=config_infer_primary_2d_action.txt
# nvstreammux settings
muxer-height=720
muxer-width=1280
# nvstreammux batched push timeout in usec
muxer-batch-timeout=40000
# nvmultistreamtiler settings
tiler-height=720
tiler-width=1280
# Log debug level. 0: disabled. 1: debug. 2: verbose.
debug=0
# Enable fps print on screen. 0: disable. 1: enable
enable-fps=1
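For reference, if more cameras are added later, the uri-list line above would carry all of them in one entry separated by ';' (the second URI below is a hypothetical placeholder):

# Hypothetical multi-source example; {IP} and {IP2} are placeholders.
uri-list=rtsp://{IP}/nphMotionJpeg?Resolution=320x240&Quality=Standard;rtsp://{IP2}/nphMotionJpeg?Resolution=320x240&Quality=Standard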
Error:
root@ip-172-30-1-233:~/zaiming/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-3d-action-recognition# GST_DEBUG=3 /root/zaiming/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-3d-action-recognition/deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt -v
num-sources = 1
Creating video
Now playing: rtsp://{IP}/nphMotionJpeg?Resolution=320x240&Quality=Standard,
0:00:03.076357868 29532 0x55e01852ac00 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/root/zaiming/models/action_recognition/resnet18_3d_rgb_224.etlt_b4_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input_rgb 3x3x224x224 min: 1x3x3x224x224 opt: 4x3x3x224x224 Max: 4x3x3x224x224
1 OUTPUT kFLOAT fc_pred 9 min: 0 opt: 0 Max: 0
0:00:03.131682682 29532 0x55e01852ac00 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /root/zaiming/models/action_recognition/resnet18_3d_rgb_224.etlt_b4_gpu0_fp16.engine
0:00:03.134002420 29532 0x55e01852ac00 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:config_infer_primary_3d_action.txt sucessfully
sequence_image_process.cpp:494, [INFO: CUSTOM_LIB] 3D custom sequence network info(NCSHW), [N: 1, C: 3, S: 3, H: 224, W:224]
sequence_image_process.cpp:522, [INFO: CUSTOM_LIB] Sequence preprocess buffer manager initialized with stride: 1, subsample: 0
sequence_image_process.cpp:526, [INFO: CUSTOM_LIB] SequenceImagePreprocess initialized successfully
Using user provided processing height = 224 and processing width = 224
Decodebin child added: source
Running...
ERROR from element source: Could not open resource for reading and writing.
Error details: gstrtspsrc.c(7893): gst_rtspsrc_retrieve_sdp (): /GstPipeline:preprocess-test-pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Failed to connect. (Generic error)
Returned, stopping playback
sequence_image_process.cpp:586, [INFO: CUSTOM_LIB] SequenceImagePreprocess is deinitializing
Deleting pipeline