Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
I set up an RTSP stream server on a local Linux machine; the stream rtsp://localhost:8554/mystream plays fine in VLC. Yet when I ran the command below, it failed.
apps/deepstream-imagedata-multistream# ./deepstream_imagedata-multistream.py rtsp://localhost:8554/mystream frames
The error messages are:
Frames will be saved in frames
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Atleast one of the sources is live
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
1 : rtsp://localhost:8554/mystream
Starting pipeline
0:00:00.832362411 3901 0x214dca0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:12.890046450 3901 0x214dca0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /root/deepstream-python-dgpu/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640 min: 1x3x368x640 opt: 1x3x368x640 Max: 1x3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40 min: 0 opt: 0 Max: 0
0:00:12.905516653 3901 0x214dca0 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 1]: Load new model:dstest_imagedata_config.txt sucessfully
Decodebin child added: source
Error: gst-resource-error-quark: Could not write to resource. (10): gstrtspsrc.c(7671): gst_rtspsrc_close (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
> Could not send message. (Received end-of-file)
Exiting app
I tested another RTSP source, rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa, and it worked properly.
Can anyone please shed some light on this?
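In case it helps with debugging: here is a minimal probe one could run to confirm the local server answers RTSP requests independently of VLC or GStreamer (a sketch using only the Python standard library; the host, port, and path are taken from my stream URL above, and rtsp_options_ok is just a throwaway helper name):

```python
import socket

def rtsp_options_ok(host, port, path, timeout=5.0):
    """Send a minimal RTSP OPTIONS request over a raw TCP socket and
    report whether the server answers with a 200 OK status line."""
    request = (
        "OPTIONS rtsp://{}:{}{} RTSP/1.0\r\n"
        "CSeq: 1\r\n"
        "User-Agent: rtsp-probe\r\n"
        "\r\n"
    ).format(host, port, path)
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request.encode("ascii"))
        reply = sock.recv(4096).decode("ascii", errors="replace")
    # A healthy server replies with e.g. "RTSP/1.0 200 OK" plus a Public: header.
    return reply.startswith("RTSP/1.0 200")
```

Calling rtsp_options_ok("localhost", 8554, "/mystream") returns True on my machine, so the server itself is reachable and speaks RTSP; the failure seems to happen later, during session setup from the DeepStream pipeline.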