Hi, I am trying to run deepstream-rtsp-in-rtsp-out.
When I am streaming the output stream there is a lot of lag. Is there any flag with which we can reduce the lag?
Did you observe the lag with a player (such as VLC)?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or for which sample application, and the function description.)
• The pipeline being used
How do I pass the autosink to the Python program?
Could you please provide complete information as applicable to your setup?
What do you mean by “pass the autosink”?
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or for which sample application, and the function description.)
• The pipeline being used
So I am trying to run deepstream-test1-rtsp-out/deepstream_test1_rtsp_out.py.
With gst-launch-1.0 we run a command like this: gst-launch-1.0 rtspsrc location=$RTSP_PATH ! rtpjpegdepay ! jpegdec ! nvvidconv ! autovideosink
So I want to know how to use this autovideosink from the program, since the entire gst-launch command can be expressed in a program.
OK, I think you should first learn some basic info about GStreamer. You can refer to the link below:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/dynamic-pipelines.html?gi-language=c
There are three basic steps (see the sketch after this list):
1. Make a plugin with gst_element_factory_make.
2. Add it to the pipeline with gst_bin_add_many.
3. Link the plugin into the pipeline with gst_element_link_many.
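To make these steps concrete, here is a minimal, self-contained Python sketch of the same three steps. It uses videotestsrc and autovideosink as stand-in elements (not the DeepStream sample's actual elements), so treat it as an illustration rather than the sample pipeline itself:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.Pipeline.new("demo-pipeline")

# 1. Make the elements (videotestsrc stands in for a real source).
source = Gst.ElementFactory.make("videotestsrc", "test-source")
sink = Gst.ElementFactory.make("autovideosink", "video-sink")

# 2. Add them to the pipeline. The Python bindings add one element
#    per call instead of C's gst_bin_add_many().
pipeline.add(source)
pipeline.add(sink)

# 3. Link them in order (the C counterpart is gst_element_link_many()).
source.link(sink)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()  # runs until interrupted; bus handling omitted for brevity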
The concern I have is how to pass the different arguments to these 3 basic steps, and more if any.
Like:
1. Gst.ElementFactory.make(“filesrc”, “file-source”)
Here Gst.ElementFactory.make has “filesrc” as input… but what if I want to pass an RTSP stream? What will the parameters be?
You can refer to the link below:
https://gstreamer.freedesktop.org/documentation/rtsp/rtspsrc.html?gi-language=c
We suggest you use uridecodebin as the source, because it has better compatibility.
https://gstreamer.freedesktop.org/documentation/playback/uridecodebin.html?gi-language=c#uridecodebin-page
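As an illustration, a minimal Python sketch of an RTSP input through uridecodebin might look like the following. The rtsp://example.com/stream URI is a placeholder, and the pad-added callback is required because uridecodebin creates its source pads dynamically, only after the stream has been probed:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.Pipeline.new("rtsp-pipeline")

# uridecodebin picks a suitable demuxer/decoder for the URI automatically,
# which is why it is more compatible than wiring rtspsrc by hand.
source = Gst.ElementFactory.make("uridecodebin", "uri-source")
source.set_property("uri", "rtsp://example.com/stream")  # placeholder URI

convert = Gst.ElementFactory.make("videoconvert", "convert")
sink = Gst.ElementFactory.make("autovideosink", "video-sink")
for element in (source, convert, sink):
    pipeline.add(element)
convert.link(sink)

def on_pad_added(decodebin, pad):
    # Only link video pads; an audio pad would fail to link to videoconvert.
    caps = pad.get_current_caps()
    if caps is None or not caps.get_structure(0).get_name().startswith("video"):
        return
    sink_pad = convert.get_static_pad("sink")
    if not sink_pad.is_linked():
        pad.link(sink_pad)

source.connect("pad-added", on_pad_added)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()

Note this plays through plain videoconvert/autovideosink; a DeepStream pipeline would instead feed nvstreammux, as the linked samples do.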
Error: gst-resource-error-quark: Resource not found. (3): gstfilesrc.c(532): gst_file_src_start (): /GstPipeline:pipeline0/GstFileSrc:file-source:
No such file “rtsp://astream”…
This happens when I am running a TAO YOLOv4-tiny model using DeepStream.
Earlier this file was taking an “RTSP” stream as input; I don’t know what happened now.
python3 deepstream_tao.py -i rtsp://a/stream0
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating H264 Encoder
Creating H264 rtppay
Playing file rtsp://stream0
Adding elements to Pipeline
Linking elements in the Pipeline
*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***
Starting pipeline
0:00:01.952906373 2058594 0x3111d80 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/models/yolov4-tiny/yolov4_cspdarknet_tiny_397.etlt_b1_gpu0_int8.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x544x960
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200
0:00:01.983177576 2058594 0x3111d80 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/models/yolov4-tiny/yolov4_cspdarknet_tiny_397.etlt_b1_gpu0_int8.engine
0:00:01.986195554 2058594 0x3111d80 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/configs/yolov4-tiny_tao/pgie_yolov4_tiny_tao_config.txt sucessfully
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:source/GstUDPSrc:udpsrc0:
streaming stopped, reason not-linked (-1)
Where am I going wrong?
python_file.py (12.4 KB)
config_file.txt (2.2 KB)
Waiting for your reply!
It’s wrong to play an RTSP source with the filesrc plugin. Could you refer to our demo code below? You can learn how to use uridecodebin as the source there.
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py
See the create_source_bin function.
This was the same Python file, which takes RTSP in and RTSP out.
You can double-check it. In the file you attached, you create the source like this: source = Gst.ElementFactory.make("filesrc", "file-source"). But in the RTSP demo, we create the source with create_source_bin.
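For reference, here is a condensed sketch of the create_source_bin pattern from the linked sample. It is simplified: the memory:NVMM feature check and the error handling from the real sample are omitted. The idea is to wrap uridecodebin in a GstBin with a ghost pad, so the rest of the pipeline can treat the RTSP source like an ordinary element:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def cb_newpad(decodebin, decoder_src_pad, data):
    # Called once uridecodebin has probed the stream and created a pad;
    # point the bin's ghost pad at the decoded video pad.
    caps = decoder_src_pad.get_current_caps()
    if caps.get_structure(0).get_name().find("video") != -1:
        data.get_static_pad("src").set_target(decoder_src_pad)

def create_source_bin(index, uri):
    nbin = Gst.Bin.new("source-bin-%02d" % index)
    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    uri_decode_bin.set_property("uri", uri)
    # Pads appear only after probing, hence the callback above.
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    Gst.Bin.add(nbin, uri_decode_bin)
    # Ghost pad with no target yet; cb_newpad sets the target later.
    nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    return nbin

The returned bin is then added to the pipeline and its "src" pad is linked to a request pad of nvstreammux, as in the sample.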
I am getting the following error.
Starting pipeline
0:00:01.275946206 2965338 0x1f37d60 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/yolo.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x512x512
1 OUTPUT kFLOAT boxes 3840x1x4
2 OUTPUT kFLOAT confs 3840x6
ERROR: [TRT]: 3: Cannot find binding of given name: BatchedNMS
0:00:01.309513830 2965338 0x1f37d60 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer ‘BatchedNMS’ in engine
0:00:01.309539127 2965338 0x1f37d60 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/yolo.engine
0:00:01.316297365 2965338 0x1f37d60 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:yolo_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: rtph264depay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f6eef6239a0 (GstCapsFeatures at 0x7f6e1003f7a0)>
Mismatch in the number of output buffers.Expected 4 output buffers, detected in the network :2
0:00:01.605222232 2965338 0x1e49180 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:726> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)
So I am trying to run a custom YOLO model on DeepStream using the deepstream_python_apps.
Starting pipeline
0:00:01.286246719 2967031 0x2dbd760 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/yolo.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x512x512
1 OUTPUT kFLOAT boxes 3840x1x4
2 OUTPUT kFLOAT confs 3840x6
ERROR: [TRT]: 3: Cannot find binding of given name: BatchedNMS
0:00:01.315672932 2967031 0x2dbd760 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer ‘BatchedNMS’ in engine
0:00:01.315775074 2967031 0x2dbd760 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/yolo.engine
0:00:01.321222578 2967031 0x2dbd760 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:yolo_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: rtph264depay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f540dc59820 (GstCapsFeatures at 0x7f5338040fa0)>
Segmentation fault (core dumped)
I am getting the following error: “ERROR: [TRT]: 3: Cannot find binding of given name: BatchedNMS”
How do I solve this? A quick reply would be great!
There has been no update from you for a period, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
This is not the original question. Please open a new topic for a new question; this will make it easier for others to find the relevant answers. Thanks
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.